Test Report: KVM_Linux_crio 22186

5e28b85a1d78221970a3d6d4a20cdd5c3710ee83:2025-12-17:42830

Test fail (8/424)

TestAddons/parallel/Ingress (161.24s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-886556 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-886556 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-886556 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [6dccff02-c09a-4293-83a1-fd22a7c40b8c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [6dccff02-c09a-4293-83a1-fd22a7c40b8c] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 15.004125961s
I1217 19:24:13.020919    7531 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-886556 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-886556 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m13.587147263s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:290: (dbg) Run:  kubectl --context addons-886556 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p addons-886556 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.39.92
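Note on the failure above: ssh propagates the remote command's exit status, and curl exit status 28 means the transfer timed out, so nothing answered on 127.0.0.1:80 inside the VM during the 2m13s window. A minimal manual check, assuming the addons-886556 profile is still running and ss is available in the guest (a diagnostic sketch, not part of the recorded run):

	# re-run the probe with verbose output and an explicit timeout
	out/minikube-linux-amd64 -p addons-886556 ssh "curl -v --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# confirm the ingress controller is running and that something listens on :80/:443 in the VM
	kubectl --context addons-886556 -n ingress-nginx get pods,svc -o wide
	out/minikube-linux-amd64 -p addons-886556 ssh "sudo ss -tlnp | grep -E ':(80|443) '"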
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-886556 -n addons-886556
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-886556 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-886556 logs -n 25: (1.315490806s)
helpers_test.go:261: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-238357                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-238357 │ jenkins │ v1.37.0 │ 17 Dec 25 19:20 UTC │ 17 Dec 25 19:20 UTC │
	│ start   │ --download-only -p binary-mirror-144298 --alsologtostderr --binary-mirror http://127.0.0.1:44329 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-144298 │ jenkins │ v1.37.0 │ 17 Dec 25 19:20 UTC │                     │
	│ delete  │ -p binary-mirror-144298                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-144298 │ jenkins │ v1.37.0 │ 17 Dec 25 19:20 UTC │ 17 Dec 25 19:20 UTC │
	│ addons  │ disable dashboard -p addons-886556                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-886556        │ jenkins │ v1.37.0 │ 17 Dec 25 19:20 UTC │                     │
	│ addons  │ enable dashboard -p addons-886556                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-886556        │ jenkins │ v1.37.0 │ 17 Dec 25 19:20 UTC │                     │
	│ start   │ -p addons-886556 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-886556        │ jenkins │ v1.37.0 │ 17 Dec 25 19:20 UTC │ 17 Dec 25 19:23 UTC │
	│ addons  │ addons-886556 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-886556        │ jenkins │ v1.37.0 │ 17 Dec 25 19:23 UTC │ 17 Dec 25 19:23 UTC │
	│ addons  │ addons-886556 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-886556        │ jenkins │ v1.37.0 │ 17 Dec 25 19:23 UTC │ 17 Dec 25 19:23 UTC │
	│ addons  │ enable headlamp -p addons-886556 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-886556        │ jenkins │ v1.37.0 │ 17 Dec 25 19:23 UTC │ 17 Dec 25 19:23 UTC │
	│ addons  │ addons-886556 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-886556        │ jenkins │ v1.37.0 │ 17 Dec 25 19:23 UTC │ 17 Dec 25 19:23 UTC │
	│ addons  │ addons-886556 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-886556        │ jenkins │ v1.37.0 │ 17 Dec 25 19:23 UTC │ 17 Dec 25 19:23 UTC │
	│ addons  │ addons-886556 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-886556        │ jenkins │ v1.37.0 │ 17 Dec 25 19:23 UTC │ 17 Dec 25 19:24 UTC │
	│ ssh     │ addons-886556 ssh cat /opt/local-path-provisioner/pvc-51a5db76-42c3-423c-b2d7-c24e496695a8_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                      │ addons-886556        │ jenkins │ v1.37.0 │ 17 Dec 25 19:23 UTC │ 17 Dec 25 19:23 UTC │
	│ addons  │ addons-886556 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-886556        │ jenkins │ v1.37.0 │ 17 Dec 25 19:23 UTC │ 17 Dec 25 19:24 UTC │
	│ ip      │ addons-886556 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-886556        │ jenkins │ v1.37.0 │ 17 Dec 25 19:23 UTC │ 17 Dec 25 19:23 UTC │
	│ addons  │ addons-886556 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-886556        │ jenkins │ v1.37.0 │ 17 Dec 25 19:23 UTC │ 17 Dec 25 19:23 UTC │
	│ addons  │ addons-886556 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-886556        │ jenkins │ v1.37.0 │ 17 Dec 25 19:23 UTC │ 17 Dec 25 19:24 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-886556                                                                                                                                                                                                                                                                                                                                                                                         │ addons-886556        │ jenkins │ v1.37.0 │ 17 Dec 25 19:24 UTC │ 17 Dec 25 19:24 UTC │
	│ addons  │ addons-886556 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-886556        │ jenkins │ v1.37.0 │ 17 Dec 25 19:24 UTC │ 17 Dec 25 19:24 UTC │
	│ addons  │ addons-886556 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-886556        │ jenkins │ v1.37.0 │ 17 Dec 25 19:24 UTC │ 17 Dec 25 19:24 UTC │
	│ addons  │ addons-886556 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-886556        │ jenkins │ v1.37.0 │ 17 Dec 25 19:24 UTC │ 17 Dec 25 19:24 UTC │
	│ ssh     │ addons-886556 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-886556        │ jenkins │ v1.37.0 │ 17 Dec 25 19:24 UTC │                     │
	│ addons  │ addons-886556 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-886556        │ jenkins │ v1.37.0 │ 17 Dec 25 19:24 UTC │ 17 Dec 25 19:24 UTC │
	│ addons  │ addons-886556 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-886556        │ jenkins │ v1.37.0 │ 17 Dec 25 19:24 UTC │ 17 Dec 25 19:24 UTC │
	│ ip      │ addons-886556 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-886556        │ jenkins │ v1.37.0 │ 17 Dec 25 19:26 UTC │ 17 Dec 25 19:26 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 19:20:57
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 19:20:57.823805    8502 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:20:57.823894    8502 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:20:57.823905    8502 out.go:374] Setting ErrFile to fd 2...
	I1217 19:20:57.823912    8502 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:20:57.824114    8502 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-3611/.minikube/bin
	I1217 19:20:57.824672    8502 out.go:368] Setting JSON to false
	I1217 19:20:57.825516    8502 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":197,"bootTime":1765999061,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 19:20:57.825586    8502 start.go:143] virtualization: kvm guest
	I1217 19:20:57.827588    8502 out.go:179] * [addons-886556] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 19:20:57.828989    8502 out.go:179]   - MINIKUBE_LOCATION=22186
	I1217 19:20:57.828987    8502 notify.go:221] Checking for updates...
	I1217 19:20:57.830423    8502 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 19:20:57.831836    8502 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-3611/kubeconfig
	I1217 19:20:57.833027    8502 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-3611/.minikube
	I1217 19:20:57.837781    8502 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 19:20:57.839177    8502 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 19:20:57.840581    8502 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 19:20:57.870963    8502 out.go:179] * Using the kvm2 driver based on user configuration
	I1217 19:20:57.872099    8502 start.go:309] selected driver: kvm2
	I1217 19:20:57.872111    8502 start.go:927] validating driver "kvm2" against <nil>
	I1217 19:20:57.872128    8502 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 19:20:57.872827    8502 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 19:20:57.873031    8502 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 19:20:57.873056    8502 cni.go:84] Creating CNI manager for ""
	I1217 19:20:57.873092    8502 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 19:20:57.873101    8502 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1217 19:20:57.873133    8502 start.go:353] cluster config:
	{Name:addons-886556 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-886556 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: A
utoPauseInterval:1m0s}
	I1217 19:20:57.873230    8502 iso.go:125] acquiring lock: {Name:mkf0d7f706dad630931de886de0fce55b517853c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 19:20:57.874622    8502 out.go:179] * Starting "addons-886556" primary control-plane node in "addons-886556" cluster
	I1217 19:20:57.875697    8502 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 19:20:57.875729    8502 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-3611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1217 19:20:57.875739    8502 cache.go:65] Caching tarball of preloaded images
	I1217 19:20:57.875830    8502 preload.go:238] Found /home/jenkins/minikube-integration/22186-3611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 19:20:57.875843    8502 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 19:20:57.876160    8502 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/config.json ...
	I1217 19:20:57.876189    8502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/config.json: {Name:mk4dda90071125ffcf60327ec69d165b551492dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:20:57.876361    8502 start.go:360] acquireMachinesLock for addons-886556: {Name:mk03890d04d41d66ccbc23571d0f065ba20ffda0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1217 19:20:57.876430    8502 start.go:364] duration metric: took 54.024µs to acquireMachinesLock for "addons-886556"
	I1217 19:20:57.876455    8502 start.go:93] Provisioning new machine with config: &{Name:addons-886556 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.3 ClusterName:addons-886556 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 19:20:57.876549    8502 start.go:125] createHost starting for "" (driver="kvm2")
	I1217 19:20:57.878048    8502 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1217 19:20:57.878207    8502 start.go:159] libmachine.API.Create for "addons-886556" (driver="kvm2")
	I1217 19:20:57.878238    8502 client.go:173] LocalClient.Create starting
	I1217 19:20:57.878315    8502 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca.pem
	I1217 19:20:57.968368    8502 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22186-3611/.minikube/certs/cert.pem
	I1217 19:20:58.045418    8502 main.go:143] libmachine: creating domain...
	I1217 19:20:58.045441    8502 main.go:143] libmachine: creating network...
	I1217 19:20:58.046789    8502 main.go:143] libmachine: found existing default network
	I1217 19:20:58.047027    8502 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1217 19:20:58.047554    8502 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ce5f60}
	I1217 19:20:58.047650    8502 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-886556</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1217 19:20:58.053630    8502 main.go:143] libmachine: creating private network mk-addons-886556 192.168.39.0/24...
	I1217 19:20:58.120662    8502 main.go:143] libmachine: private network mk-addons-886556 192.168.39.0/24 created
	I1217 19:20:58.120948    8502 main.go:143] libmachine: <network>
	  <name>mk-addons-886556</name>
	  <uuid>aca24f78-8089-400f-af3e-2df8ba584310</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:75:6e:c1'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1217 19:20:58.121007    8502 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556 ...
	I1217 19:20:58.121045    8502 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22186-3611/.minikube/cache/iso/amd64/minikube-v1.37.0-1765965980-22186-amd64.iso
	I1217 19:20:58.121059    8502 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22186-3611/.minikube
	I1217 19:20:58.121140    8502 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22186-3611/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22186-3611/.minikube/cache/iso/amd64/minikube-v1.37.0-1765965980-22186-amd64.iso...
	I1217 19:20:58.411320    8502 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/id_rsa...
	I1217 19:20:58.479620    8502 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/addons-886556.rawdisk...
	I1217 19:20:58.479665    8502 main.go:143] libmachine: Writing magic tar header
	I1217 19:20:58.479692    8502 main.go:143] libmachine: Writing SSH key tar header
	I1217 19:20:58.479797    8502 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556 ...
	I1217 19:20:58.479877    8502 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556
	I1217 19:20:58.479925    8502 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556 (perms=drwx------)
	I1217 19:20:58.479953    8502 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22186-3611/.minikube/machines
	I1217 19:20:58.479972    8502 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22186-3611/.minikube/machines (perms=drwxr-xr-x)
	I1217 19:20:58.479990    8502 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22186-3611/.minikube
	I1217 19:20:58.480009    8502 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22186-3611/.minikube (perms=drwxr-xr-x)
	I1217 19:20:58.480026    8502 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22186-3611
	I1217 19:20:58.480043    8502 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22186-3611 (perms=drwxrwxr-x)
	I1217 19:20:58.480060    8502 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1217 19:20:58.480074    8502 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1217 19:20:58.480087    8502 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1217 19:20:58.480112    8502 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1217 19:20:58.480131    8502 main.go:143] libmachine: checking permissions on dir: /home
	I1217 19:20:58.480144    8502 main.go:143] libmachine: skipping /home - not owner
	I1217 19:20:58.480151    8502 main.go:143] libmachine: defining domain...
	I1217 19:20:58.481444    8502 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-886556</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/addons-886556.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-886556'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1217 19:20:58.489252    8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:ee:de:94 in network default
	I1217 19:20:58.489890    8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:20:58.489913    8502 main.go:143] libmachine: starting domain...
	I1217 19:20:58.489919    8502 main.go:143] libmachine: ensuring networks are active...
	I1217 19:20:58.490758    8502 main.go:143] libmachine: Ensuring network default is active
	I1217 19:20:58.491128    8502 main.go:143] libmachine: Ensuring network mk-addons-886556 is active
	I1217 19:20:58.492042    8502 main.go:143] libmachine: getting domain XML...
	I1217 19:20:58.493268    8502 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-886556</name>
	  <uuid>9d7dd346-d2b7-4fec-936f-08e6e7425367</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/addons-886556.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:a0:a1:59'/>
	      <source network='mk-addons-886556'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:ee:de:94'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1217 19:20:59.792041    8502 main.go:143] libmachine: waiting for domain to start...
	I1217 19:20:59.793336    8502 main.go:143] libmachine: domain is now running
	I1217 19:20:59.793358    8502 main.go:143] libmachine: waiting for IP...
	I1217 19:20:59.794012    8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:20:59.794436    8502 main.go:143] libmachine: no network interface addresses found for domain addons-886556 (source=lease)
	I1217 19:20:59.794450    8502 main.go:143] libmachine: trying to list again with source=arp
	I1217 19:20:59.794776    8502 main.go:143] libmachine: unable to find current IP address of domain addons-886556 in network mk-addons-886556 (interfaces detected: [])
	I1217 19:20:59.794825    8502 retry.go:31] will retry after 203.171763ms: waiting for domain to come up
	I1217 19:20:59.999183    8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:20:59.999819    8502 main.go:143] libmachine: no network interface addresses found for domain addons-886556 (source=lease)
	I1217 19:20:59.999836    8502 main.go:143] libmachine: trying to list again with source=arp
	I1217 19:21:00.000205    8502 main.go:143] libmachine: unable to find current IP address of domain addons-886556 in network mk-addons-886556 (interfaces detected: [])
	I1217 19:21:00.000257    8502 retry.go:31] will retry after 280.603302ms: waiting for domain to come up
	I1217 19:21:00.282706    8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:00.283209    8502 main.go:143] libmachine: no network interface addresses found for domain addons-886556 (source=lease)
	I1217 19:21:00.283222    8502 main.go:143] libmachine: trying to list again with source=arp
	I1217 19:21:00.283475    8502 main.go:143] libmachine: unable to find current IP address of domain addons-886556 in network mk-addons-886556 (interfaces detected: [])
	I1217 19:21:00.283508    8502 retry.go:31] will retry after 307.303733ms: waiting for domain to come up
	I1217 19:21:00.591871    8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:00.592310    8502 main.go:143] libmachine: no network interface addresses found for domain addons-886556 (source=lease)
	I1217 19:21:00.592326    8502 main.go:143] libmachine: trying to list again with source=arp
	I1217 19:21:00.592644    8502 main.go:143] libmachine: unable to find current IP address of domain addons-886556 in network mk-addons-886556 (interfaces detected: [])
	I1217 19:21:00.592686    8502 retry.go:31] will retry after 610.242195ms: waiting for domain to come up
	I1217 19:21:01.204023    8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:01.204710    8502 main.go:143] libmachine: no network interface addresses found for domain addons-886556 (source=lease)
	I1217 19:21:01.204727    8502 main.go:143] libmachine: trying to list again with source=arp
	I1217 19:21:01.205013    8502 main.go:143] libmachine: unable to find current IP address of domain addons-886556 in network mk-addons-886556 (interfaces detected: [])
	I1217 19:21:01.205045    8502 retry.go:31] will retry after 740.456865ms: waiting for domain to come up
	I1217 19:21:01.946747    8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:01.947444    8502 main.go:143] libmachine: no network interface addresses found for domain addons-886556 (source=lease)
	I1217 19:21:01.947463    8502 main.go:143] libmachine: trying to list again with source=arp
	I1217 19:21:01.947761    8502 main.go:143] libmachine: unable to find current IP address of domain addons-886556 in network mk-addons-886556 (interfaces detected: [])
	I1217 19:21:01.947803    8502 retry.go:31] will retry after 844.164568ms: waiting for domain to come up
	I1217 19:21:02.794100    8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:02.794738    8502 main.go:143] libmachine: no network interface addresses found for domain addons-886556 (source=lease)
	I1217 19:21:02.794757    8502 main.go:143] libmachine: trying to list again with source=arp
	I1217 19:21:02.795063    8502 main.go:143] libmachine: unable to find current IP address of domain addons-886556 in network mk-addons-886556 (interfaces detected: [])
	I1217 19:21:02.795115    8502 retry.go:31] will retry after 779.073526ms: waiting for domain to come up
	I1217 19:21:03.575927    8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:03.576568    8502 main.go:143] libmachine: no network interface addresses found for domain addons-886556 (source=lease)
	I1217 19:21:03.576588    8502 main.go:143] libmachine: trying to list again with source=arp
	I1217 19:21:03.576834    8502 main.go:143] libmachine: unable to find current IP address of domain addons-886556 in network mk-addons-886556 (interfaces detected: [])
	I1217 19:21:03.576865    8502 retry.go:31] will retry after 1.230149664s: waiting for domain to come up
	I1217 19:21:04.809397    8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:04.810030    8502 main.go:143] libmachine: no network interface addresses found for domain addons-886556 (source=lease)
	I1217 19:21:04.810047    8502 main.go:143] libmachine: trying to list again with source=arp
	I1217 19:21:04.810336    8502 main.go:143] libmachine: unable to find current IP address of domain addons-886556 in network mk-addons-886556 (interfaces detected: [])
	I1217 19:21:04.810388    8502 retry.go:31] will retry after 1.834558493s: waiting for domain to come up
	I1217 19:21:06.647381    8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:06.647919    8502 main.go:143] libmachine: no network interface addresses found for domain addons-886556 (source=lease)
	I1217 19:21:06.647934    8502 main.go:143] libmachine: trying to list again with source=arp
	I1217 19:21:06.648189    8502 main.go:143] libmachine: unable to find current IP address of domain addons-886556 in network mk-addons-886556 (interfaces detected: [])
	I1217 19:21:06.648218    8502 retry.go:31] will retry after 1.980010423s: waiting for domain to come up
	I1217 19:21:08.629424    8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:08.630069    8502 main.go:143] libmachine: no network interface addresses found for domain addons-886556 (source=lease)
	I1217 19:21:08.630090    8502 main.go:143] libmachine: trying to list again with source=arp
	I1217 19:21:08.630396    8502 main.go:143] libmachine: unable to find current IP address of domain addons-886556 in network mk-addons-886556 (interfaces detected: [])
	I1217 19:21:08.630429    8502 retry.go:31] will retry after 2.681115886s: waiting for domain to come up
	I1217 19:21:11.312827    8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:11.313414    8502 main.go:143] libmachine: no network interface addresses found for domain addons-886556 (source=lease)
	I1217 19:21:11.313430    8502 main.go:143] libmachine: trying to list again with source=arp
	I1217 19:21:11.313777    8502 main.go:143] libmachine: unable to find current IP address of domain addons-886556 in network mk-addons-886556 (interfaces detected: [])
	I1217 19:21:11.313817    8502 retry.go:31] will retry after 2.507746112s: waiting for domain to come up
	I1217 19:21:13.823749    8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:13.824640    8502 main.go:143] libmachine: domain addons-886556 has current primary IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:13.824665    8502 main.go:143] libmachine: found domain IP: 192.168.39.92
	I1217 19:21:13.824676    8502 main.go:143] libmachine: reserving static IP address...
	I1217 19:21:13.825231    8502 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-886556", mac: "52:54:00:a0:a1:59", ip: "192.168.39.92"} in network mk-addons-886556
	I1217 19:21:14.048396    8502 main.go:143] libmachine: reserved static IP address 192.168.39.92 for domain addons-886556
	I1217 19:21:14.048420    8502 main.go:143] libmachine: waiting for SSH...
	I1217 19:21:14.048428    8502 main.go:143] libmachine: Getting to WaitForSSH function...
	I1217 19:21:14.051179    8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:14.051661    8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a0:a1:59}
	I1217 19:21:14.051695    8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:14.051903    8502 main.go:143] libmachine: Using SSH client type: native
	I1217 19:21:14.052109    8502 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I1217 19:21:14.052121    8502 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1217 19:21:14.169401    8502 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 19:21:14.169798    8502 main.go:143] libmachine: domain creation complete
	I1217 19:21:14.171349    8502 machine.go:94] provisionDockerMachine start ...
	I1217 19:21:14.173680    8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:14.174091    8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
	I1217 19:21:14.174117    8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:14.174331    8502 main.go:143] libmachine: Using SSH client type: native
	I1217 19:21:14.174612    8502 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I1217 19:21:14.174624    8502 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 19:21:14.296862    8502 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1217 19:21:14.296887    8502 buildroot.go:166] provisioning hostname "addons-886556"
	I1217 19:21:14.300271    8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:14.300797    8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
	I1217 19:21:14.300831    8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:14.301020    8502 main.go:143] libmachine: Using SSH client type: native
	I1217 19:21:14.301258    8502 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I1217 19:21:14.301271    8502 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-886556 && echo "addons-886556" | sudo tee /etc/hostname
	I1217 19:21:14.439027    8502 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-886556
	
	I1217 19:21:14.441944    8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:14.442388    8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
	I1217 19:21:14.442408    8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:14.442625    8502 main.go:143] libmachine: Using SSH client type: native
	I1217 19:21:14.442838    8502 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I1217 19:21:14.442852    8502 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-886556' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-886556/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-886556' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 19:21:14.572842    8502 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 19:21:14.572868    8502 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22186-3611/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-3611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-3611/.minikube}
	I1217 19:21:14.572884    8502 buildroot.go:174] setting up certificates
	I1217 19:21:14.572894    8502 provision.go:84] configureAuth start
	I1217 19:21:14.575876    8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:14.576389    8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
	I1217 19:21:14.576421    8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:14.579055    8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:14.579501    8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
	I1217 19:21:14.579544    8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:14.579716    8502 provision.go:143] copyHostCerts
	I1217 19:21:14.579805    8502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-3611/.minikube/ca.pem (1082 bytes)
	I1217 19:21:14.579915    8502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-3611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-3611/.minikube/cert.pem (1123 bytes)
	I1217 19:21:14.579969    8502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-3611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-3611/.minikube/key.pem (1679 bytes)
	I1217 19:21:14.580013    8502 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-3611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca-key.pem org=jenkins.addons-886556 san=[127.0.0.1 192.168.39.92 addons-886556 localhost minikube]
	I1217 19:21:14.648029    8502 provision.go:177] copyRemoteCerts
	I1217 19:21:14.648091    8502 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 19:21:14.650785    8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:14.651200    8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
	I1217 19:21:14.651223    8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:14.651405    8502 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/id_rsa Username:docker}
	I1217 19:21:14.743301    8502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 19:21:14.777058    8502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1217 19:21:14.810665    8502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 19:21:14.844917    8502 provision.go:87] duration metric: took 272.010654ms to configureAuth
	I1217 19:21:14.844949    8502 buildroot.go:189] setting minikube options for container-runtime
	I1217 19:21:14.845167    8502 config.go:182] Loaded profile config "addons-886556": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:21:14.848018    8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:14.848486    8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
	I1217 19:21:14.848518    8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:14.848707    8502 main.go:143] libmachine: Using SSH client type: native
	I1217 19:21:14.848902    8502 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I1217 19:21:14.848916    8502 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 19:21:15.186908    8502 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 19:21:15.186934    8502 machine.go:97] duration metric: took 1.015568656s to provisionDockerMachine
	I1217 19:21:15.186944    8502 client.go:176] duration metric: took 17.308699397s to LocalClient.Create
	I1217 19:21:15.186960    8502 start.go:167] duration metric: took 17.308754047s to libmachine.API.Create "addons-886556"
	I1217 19:21:15.186968    8502 start.go:293] postStartSetup for "addons-886556" (driver="kvm2")
	I1217 19:21:15.186976    8502 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 19:21:15.187049    8502 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 19:21:15.190125    8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:15.190549    8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
	I1217 19:21:15.190578    8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:15.190755    8502 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/id_rsa Username:docker}
	I1217 19:21:15.279891    8502 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 19:21:15.284804    8502 info.go:137] Remote host: Buildroot 2025.02
	I1217 19:21:15.284835    8502 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-3611/.minikube/addons for local assets ...
	I1217 19:21:15.284910    8502 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-3611/.minikube/files for local assets ...
	I1217 19:21:15.284951    8502 start.go:296] duration metric: took 97.977625ms for postStartSetup
	I1217 19:21:15.289352    8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:15.289712    8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
	I1217 19:21:15.289735    8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:15.289917    8502 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/config.json ...
	I1217 19:21:15.290091    8502 start.go:128] duration metric: took 17.413531228s to createHost
	I1217 19:21:15.292224    8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:15.292627    8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
	I1217 19:21:15.292653    8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:15.292839    8502 main.go:143] libmachine: Using SSH client type: native
	I1217 19:21:15.293088    8502 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I1217 19:21:15.293100    8502 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1217 19:21:15.411819    8502 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765999275.367003414
	
	I1217 19:21:15.411852    8502 fix.go:216] guest clock: 1765999275.367003414
	I1217 19:21:15.411862    8502 fix.go:229] Guest: 2025-12-17 19:21:15.367003414 +0000 UTC Remote: 2025-12-17 19:21:15.290103157 +0000 UTC m=+17.513279926 (delta=76.900257ms)
	I1217 19:21:15.411884    8502 fix.go:200] guest clock delta is within tolerance: 76.900257ms
	I1217 19:21:15.411890    8502 start.go:83] releasing machines lock for "addons-886556", held for 17.53544805s
	I1217 19:21:15.414616    8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:15.414966    8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
	I1217 19:21:15.414995    8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:15.415585    8502 ssh_runner.go:195] Run: cat /version.json
	I1217 19:21:15.415622    8502 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 19:21:15.418706    8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:15.418738    8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:15.419111    8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
	I1217 19:21:15.419171    8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
	I1217 19:21:15.419179    8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:15.419198    8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:15.419421    8502 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/id_rsa Username:docker}
	I1217 19:21:15.419430    8502 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/id_rsa Username:docker}
	I1217 19:21:15.537520    8502 ssh_runner.go:195] Run: systemctl --version
	I1217 19:21:15.544400    8502 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 19:21:15.704034    8502 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 19:21:15.711391    8502 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 19:21:15.711472    8502 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 19:21:15.735074    8502 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 19:21:15.735111    8502 start.go:496] detecting cgroup driver to use...
	I1217 19:21:15.735187    8502 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 19:21:15.762556    8502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 19:21:15.785216    8502 docker.go:218] disabling cri-docker service (if available) ...
	I1217 19:21:15.785286    8502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 19:21:15.804692    8502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 19:21:15.822494    8502 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 19:21:15.974641    8502 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 19:21:16.205424    8502 docker.go:234] disabling docker service ...
	I1217 19:21:16.205500    8502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 19:21:16.222601    8502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 19:21:16.238813    8502 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 19:21:16.399827    8502 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 19:21:16.548077    8502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 19:21:16.565428    8502 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 19:21:16.589616    8502 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 19:21:16.589690    8502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:21:16.603118    8502 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 19:21:16.603197    8502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:21:16.617064    8502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:21:16.630781    8502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:21:16.644559    8502 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 19:21:16.658592    8502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:21:16.671764    8502 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:21:16.694194    8502 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
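
The run of sed commands just above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, switches the cgroup manager to cgroupfs, sets conmon_cgroup, and seeds default_sysctls with net.ipv4.ip_unprivileged_port_start=0. A hedged Go sketch of the generic "ensure key = value in a config file" pattern those seds implement (helper name and regex are assumptions, not minikube's code, and the array handling is simplified away):

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // ensureKey rewrites any existing "key = ..." line to the desired value and
    // appends the line when the key is absent, which is roughly what the
    // repeated sed -i 's|^.*key = .*$|key = "value"|' calls above do.
    func ensureKey(path, key, value string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        line := fmt.Sprintf("%s = %q", key, value)
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        if re.Match(data) {
            data = re.ReplaceAll(data, []byte(line))
        } else {
            data = append(data, []byte("\n"+line+"\n")...)
        }
        return os.WriteFile(path, data, 0o644)
    }

    func main() {
        conf := "/etc/crio/crio.conf.d/02-crio.conf"
        for k, v := range map[string]string{
            "pause_image":    "registry.k8s.io/pause:3.10.1",
            "cgroup_manager": "cgroupfs",
        } {
            if err := ensureKey(conf, k, v); err != nil {
                fmt.Fprintln(os.Stderr, err)
            }
        }
    }
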
	I1217 19:21:16.708548    8502 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 19:21:16.720387    8502 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1217 19:21:16.720455    8502 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1217 19:21:16.745604    8502 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
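
When reading net.bridge.bridge-nf-call-iptables fails because br_netfilter is not loaded yet, the provisioner falls back to modprobe and then enables IPv4 forwarding by writing straight into /proc, as the last two commands show. A small Go equivalent of that final write (assumes it runs as root; the helper is hypothetical):

    package main

    import (
        "fmt"
        "os"
    )

    // setSysctl writes a value under /proc/sys, the same effect as
    // "echo 1 > /proc/sys/net/ipv4/ip_forward" in the log above.
    func setSysctl(key, value string) error {
        return os.WriteFile("/proc/sys/"+key, []byte(value), 0o644)
    }

    func main() {
        if err := setSysctl("net/ipv4/ip_forward", "1"); err != nil {
            fmt.Fprintln(os.Stderr, "enable ip_forward:", err)
        }
    }
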
	I1217 19:21:16.762000    8502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 19:21:16.905628    8502 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 19:21:17.039636    8502 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 19:21:17.039735    8502 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 19:21:17.045593    8502 start.go:564] Will wait 60s for crictl version
	I1217 19:21:17.045685    8502 ssh_runner.go:195] Run: which crictl
	I1217 19:21:17.050292    8502 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1217 19:21:17.088112    8502 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1217 19:21:17.088263    8502 ssh_runner.go:195] Run: crio --version
	I1217 19:21:17.118813    8502 ssh_runner.go:195] Run: crio --version
	I1217 19:21:17.152495    8502 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.29.1 ...
	I1217 19:21:17.156865    8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:17.157285    8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
	I1217 19:21:17.157311    8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:17.157586    8502 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1217 19:21:17.163055    8502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 19:21:17.180783    8502 kubeadm.go:884] updating cluster {Name:addons-886556 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-886556 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 19:21:17.180890    8502 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 19:21:17.180930    8502 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 19:21:17.215245    8502 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.3". assuming images are not preloaded.
	I1217 19:21:17.215325    8502 ssh_runner.go:195] Run: which lz4
	I1217 19:21:17.220214    8502 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1217 19:21:17.225745    8502 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1217 19:21:17.225789    8502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340314847 bytes)
	I1217 19:21:18.544953    8502 crio.go:462] duration metric: took 1.324813392s to copy over tarball
	I1217 19:21:18.545026    8502 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1217 19:21:20.094525    8502 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.549464855s)
	I1217 19:21:20.094586    8502 crio.go:469] duration metric: took 1.549604367s to extract the tarball
	I1217 19:21:20.094594    8502 ssh_runner.go:146] rm: /preloaded.tar.lz4
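
The preload flow above is: stat /preloaded.tar.lz4 to see whether it already exists on the guest, scp the roughly 340 MB image tarball when it does not, unpack it into /var with tar piped through lz4, then delete the tarball. A sketch of the extract-and-clean-up step driven from Go (same tar flags as in the log; assumes tar and lz4 are available on the target, and sudo is omitted):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Same invocation as the log: sudo tar --xattrs
        //   --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
        cmd := exec.Command("tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            fmt.Fprintln(os.Stderr, "extract preload:", err)
            os.Exit(1)
        }
        // The tarball is deleted afterwards to reclaim guest disk space.
        _ = os.Remove("/preloaded.tar.lz4")
    }
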
	I1217 19:21:20.131704    8502 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 19:21:20.170657    8502 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 19:21:20.170682    8502 cache_images.go:86] Images are preloaded, skipping loading
	I1217 19:21:20.170690    8502 kubeadm.go:935] updating node { 192.168.39.92 8443 v1.34.3 crio true true} ...
	I1217 19:21:20.170766    8502 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-886556 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.92
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:addons-886556 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 19:21:20.170830    8502 ssh_runner.go:195] Run: crio config
	I1217 19:21:20.218630    8502 cni.go:84] Creating CNI manager for ""
	I1217 19:21:20.218702    8502 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 19:21:20.218737    8502 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 19:21:20.218784    8502 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.92 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-886556 NodeName:addons-886556 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.92"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.92 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 19:21:20.219074    8502 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.92
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-886556"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.92"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.92"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 19:21:20.219176    8502 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 19:21:20.231145    8502 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 19:21:20.231200    8502 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 19:21:20.242397    8502 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1217 19:21:20.262036    8502 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 19:21:20.281052    8502 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
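
The kubeadm, kubelet and kube-proxy documents above are rendered on the host and then copied to the guest as /var/tmp/minikube/kubeadm.yaml.new (2213 bytes); only node-specific values such as the advertise address, node name, versions and cgroup driver differ between clusters. One way to produce such a document is Go's text/template; a minimal sketch of that render step (the template text below is illustrative, not minikube's actual template):

    package main

    import (
        "os"
        "text/template"
    )

    // An illustrative fragment only, covering a few of the KubeletConfiguration
    // fields visible in the log above.
    const kubeletFragment = `apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: {{.CgroupDriver}}
    containerRuntimeEndpoint: {{.CRISocket}}
    clusterDomain: "{{.DNSDomain}}"
    staticPodPath: /etc/kubernetes/manifests
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(kubeletFragment))
        _ = t.Execute(os.Stdout, map[string]string{
            "CgroupDriver": "cgroupfs",
            "CRISocket":    "unix:///var/run/crio/crio.sock",
            "DNSDomain":    "cluster.local",
        })
    }
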
	I1217 19:21:20.300659    8502 ssh_runner.go:195] Run: grep 192.168.39.92	control-plane.minikube.internal$ /etc/hosts
	I1217 19:21:20.304712    8502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.92	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 19:21:20.318847    8502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 19:21:20.463640    8502 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 19:21:20.499965    8502 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556 for IP: 192.168.39.92
	I1217 19:21:20.499986    8502 certs.go:195] generating shared ca certs ...
	I1217 19:21:20.500000    8502 certs.go:227] acquiring lock for ca certs: {Name:mka9d751f3e3cbcb654d1f1d24f2b10b27bc58a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:21:20.500140    8502 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-3611/.minikube/ca.key
	I1217 19:21:20.531735    8502 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-3611/.minikube/ca.crt ...
	I1217 19:21:20.531762    8502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-3611/.minikube/ca.crt: {Name:mke133978246d86d25f83680d056f0becec00cde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:21:20.531909    8502 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-3611/.minikube/ca.key ...
	I1217 19:21:20.531919    8502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-3611/.minikube/ca.key: {Name:mk3bbb3a281ad4113e29b15cfc9da235007f0c14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:21:20.531989    8502 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-3611/.minikube/proxy-client-ca.key
	I1217 19:21:20.712328    8502 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-3611/.minikube/proxy-client-ca.crt ...
	I1217 19:21:20.712358    8502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-3611/.minikube/proxy-client-ca.crt: {Name:mk80d8a99bde89b8a4c0aed125150a55ea9e10ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:21:20.712506    8502 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-3611/.minikube/proxy-client-ca.key ...
	I1217 19:21:20.712516    8502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-3611/.minikube/proxy-client-ca.key: {Name:mk6af4243fb1605159c5504c82735178cd145803 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:21:20.712602    8502 certs.go:257] generating profile certs ...
	I1217 19:21:20.712652    8502 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/client.key
	I1217 19:21:20.712674    8502 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/client.crt with IP's: []
	I1217 19:21:20.850457    8502 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/client.crt ...
	I1217 19:21:20.850484    8502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/client.crt: {Name:mk2fedc6adf0d18a3c89d248e468613ff49b6202 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:21:20.850655    8502 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/client.key ...
	I1217 19:21:20.850667    8502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/client.key: {Name:mk9acfe2f8a697299d32b49792e0ce7628c1d91f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:21:20.850736    8502 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/apiserver.key.7ff6a81a
	I1217 19:21:20.850754    8502 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/apiserver.crt.7ff6a81a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.92]
	I1217 19:21:20.944063    8502 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/apiserver.crt.7ff6a81a ...
	I1217 19:21:20.944091    8502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/apiserver.crt.7ff6a81a: {Name:mkb89ff4d4058b0e80f7486865da6036f6c35ff5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:21:20.944265    8502 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/apiserver.key.7ff6a81a ...
	I1217 19:21:20.944278    8502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/apiserver.key.7ff6a81a: {Name:mk36d0fccdab8d7c6c0f8341e4315678b659e8b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:21:20.944848    8502 certs.go:382] copying /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/apiserver.crt.7ff6a81a -> /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/apiserver.crt
	I1217 19:21:20.944925    8502 certs.go:386] copying /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/apiserver.key.7ff6a81a -> /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/apiserver.key
	I1217 19:21:20.944975    8502 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/proxy-client.key
	I1217 19:21:20.944994    8502 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/proxy-client.crt with IP's: []
	I1217 19:21:21.095027    8502 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/proxy-client.crt ...
	I1217 19:21:21.095055    8502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/proxy-client.crt: {Name:mkb7dd245a415ac8ce4cbbea9a028084ba73665c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:21:21.095224    8502 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/proxy-client.key ...
	I1217 19:21:21.095235    8502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/proxy-client.key: {Name:mkf2c7b59493f0a026c238b0cbf503cb32c7693f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:21:21.095410    8502 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 19:21:21.095445    8502 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca.pem (1082 bytes)
	I1217 19:21:21.095470    8502 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-3611/.minikube/certs/cert.pem (1123 bytes)
	I1217 19:21:21.095492    8502 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-3611/.minikube/certs/key.pem (1679 bytes)
	I1217 19:21:21.096000    8502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 19:21:21.129606    8502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 19:21:21.162180    8502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 19:21:21.206122    8502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 19:21:21.251678    8502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1217 19:21:21.287632    8502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 19:21:21.320086    8502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 19:21:21.352928    8502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 19:21:21.385635    8502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 19:21:21.430370    8502 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 19:21:21.452956    8502 ssh_runner.go:195] Run: openssl version
	I1217 19:21:21.460120    8502 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 19:21:21.473269    8502 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 19:21:21.486569    8502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 19:21:21.492757    8502 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 19:21 /usr/share/ca-certificates/minikubeCA.pem
	I1217 19:21:21.492819    8502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 19:21:21.500824    8502 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 19:21:21.514298    8502 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
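
The openssl/ln pair above follows the standard OpenSSL CA-directory convention: compute the certificate's subject hash (b5213941 here) and symlink /etc/ssl/certs/<hash>.0 to the CA PEM so clients that scan the hashed directory can find minikubeCA.pem. Sketched in Go by shelling out to openssl (paths taken from the log, sudo omitted):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        pem := "/etc/ssl/certs/minikubeCA.pem"
        // openssl x509 -hash -noout -in <pem> prints the subject hash (b5213941 above).
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
        _ = os.Remove(link) // ln -fs semantics: replace any existing link
        if err := os.Symlink(pem, link); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
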
	I1217 19:21:21.527929    8502 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 19:21:21.533670    8502 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 19:21:21.533743    8502 kubeadm.go:401] StartCluster: {Name:addons-886556 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-886556 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 19:21:21.533841    8502 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 19:21:21.533906    8502 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 19:21:21.574990    8502 cri.go:89] found id: ""
	I1217 19:21:21.575071    8502 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 19:21:21.588999    8502 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 19:21:21.602433    8502 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 19:21:21.615656    8502 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 19:21:21.615679    8502 kubeadm.go:158] found existing configuration files:
	
	I1217 19:21:21.615729    8502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 19:21:21.629370    8502 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 19:21:21.629447    8502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 19:21:21.643815    8502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 19:21:21.656233    8502 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 19:21:21.656309    8502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 19:21:21.669749    8502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 19:21:21.682002    8502 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 19:21:21.682068    8502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 19:21:21.694936    8502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 19:21:21.706616    8502 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 19:21:21.706702    8502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 19:21:21.720680    8502 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1217 19:21:21.777524    8502 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1217 19:21:21.777721    8502 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 19:21:21.889364    8502 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 19:21:21.889474    8502 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 19:21:21.889614    8502 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 19:21:21.906323    8502 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 19:21:21.909894    8502 out.go:252]   - Generating certificates and keys ...
	I1217 19:21:21.910012    8502 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 19:21:21.910102    8502 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 19:21:21.979615    8502 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 19:21:22.517674    8502 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 19:21:22.716263    8502 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 19:21:23.060400    8502 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 19:21:23.121724    8502 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 19:21:23.121903    8502 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-886556 localhost] and IPs [192.168.39.92 127.0.0.1 ::1]
	I1217 19:21:23.175921    8502 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 19:21:23.176136    8502 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-886556 localhost] and IPs [192.168.39.92 127.0.0.1 ::1]
	I1217 19:21:23.488972    8502 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 19:21:24.035548    8502 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 19:21:24.333932    8502 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 19:21:24.334082    8502 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 19:21:24.547294    8502 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 19:21:24.928245    8502 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 19:21:25.113392    8502 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 19:21:25.287318    8502 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 19:21:25.409006    8502 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 19:21:25.409165    8502 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 19:21:25.411437    8502 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 19:21:25.413490    8502 out.go:252]   - Booting up control plane ...
	I1217 19:21:25.413622    8502 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 19:21:25.413753    8502 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 19:21:25.414498    8502 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 19:21:25.433118    8502 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 19:21:25.433353    8502 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 19:21:25.441668    8502 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 19:21:25.442240    8502 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 19:21:25.442442    8502 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 19:21:25.626703    8502 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 19:21:25.626887    8502 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 19:21:26.627780    8502 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002565118s
	I1217 19:21:26.630773    8502 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 19:21:26.630923    8502 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.92:8443/livez
	I1217 19:21:26.631088    8502 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 19:21:26.631224    8502 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 19:21:29.964182    8502 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.336943937s
	I1217 19:21:30.877596    8502 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.251547424s
	I1217 19:21:33.626261    8502 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.001629428s
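
kubeadm's wait phase shown above simply polls local health endpoints (kubelet on :10248, kube-controller-manager on :10257, kube-scheduler on :10259, kube-apiserver on :8443) until each answers 200 OK or its 4m0s budget expires. A self-contained Go poller in the same spirit (URLs and timeout taken from the log; skipping TLS verification is acceptable here only because these are local health probes in a sketch):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitHealthy(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // The secure ports serve self-signed certificates; skip verification
            // for this sketch only.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s not healthy after %s", url, timeout)
    }

    func main() {
        checks := []string{
            "http://127.0.0.1:10248/healthz",   // kubelet
            "https://127.0.0.1:10257/healthz",  // kube-controller-manager
            "https://127.0.0.1:10259/livez",    // kube-scheduler
            "https://192.168.39.92:8443/livez", // kube-apiserver
        }
        for _, url := range checks {
            if err := waitHealthy(url, 4*time.Minute); err != nil {
                fmt.Println(err)
            }
        }
    }
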
	I1217 19:21:33.648707    8502 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 19:21:33.667735    8502 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 19:21:33.685733    8502 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 19:21:33.685934    8502 kubeadm.go:319] [mark-control-plane] Marking the node addons-886556 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 19:21:33.700067    8502 kubeadm.go:319] [bootstrap-token] Using token: bvjewc.pjpdbzfshg78w916
	I1217 19:21:33.701493    8502 out.go:252]   - Configuring RBAC rules ...
	I1217 19:21:33.701676    8502 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 19:21:33.713204    8502 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 19:21:33.730449    8502 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 19:21:33.734681    8502 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 19:21:33.738742    8502 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 19:21:33.742953    8502 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 19:21:34.033739    8502 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 19:21:34.503426    8502 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 19:21:35.031774    8502 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 19:21:35.032650    8502 kubeadm.go:319] 
	I1217 19:21:35.032745    8502 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 19:21:35.032756    8502 kubeadm.go:319] 
	I1217 19:21:35.032838    8502 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 19:21:35.032848    8502 kubeadm.go:319] 
	I1217 19:21:35.032897    8502 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 19:21:35.032990    8502 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 19:21:35.033051    8502 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 19:21:35.033067    8502 kubeadm.go:319] 
	I1217 19:21:35.033121    8502 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 19:21:35.033128    8502 kubeadm.go:319] 
	I1217 19:21:35.033168    8502 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 19:21:35.033172    8502 kubeadm.go:319] 
	I1217 19:21:35.033216    8502 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 19:21:35.033285    8502 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 19:21:35.033350    8502 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 19:21:35.033359    8502 kubeadm.go:319] 
	I1217 19:21:35.033445    8502 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 19:21:35.033614    8502 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 19:21:35.033635    8502 kubeadm.go:319] 
	I1217 19:21:35.033739    8502 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token bvjewc.pjpdbzfshg78w916 \
	I1217 19:21:35.033869    8502 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:dc326feeb8e3fcc0b2a801c12465db03b3f763bf73e8e9492b30fdc056a1ecc4 \
	I1217 19:21:35.033907    8502 kubeadm.go:319] 	--control-plane 
	I1217 19:21:35.033917    8502 kubeadm.go:319] 
	I1217 19:21:35.034021    8502 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 19:21:35.034030    8502 kubeadm.go:319] 
	I1217 19:21:35.034149    8502 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token bvjewc.pjpdbzfshg78w916 \
	I1217 19:21:35.034242    8502 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:dc326feeb8e3fcc0b2a801c12465db03b3f763bf73e8e9492b30fdc056a1ecc4 
	I1217 19:21:35.035642    8502 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 19:21:35.035682    8502 cni.go:84] Creating CNI manager for ""
	I1217 19:21:35.035697    8502 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 19:21:35.037579    8502 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1217 19:21:35.038956    8502 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1217 19:21:35.053434    8502 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1217 19:21:35.080131    8502 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 19:21:35.080209    8502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:21:35.080231    8502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-886556 minikube.k8s.io/updated_at=2025_12_17T19_21_35_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2e96f676eb7e96389e85fe0658a4ede4c4ba6924 minikube.k8s.io/name=addons-886556 minikube.k8s.io/primary=true
	I1217 19:21:35.247395    8502 ops.go:34] apiserver oom_adj: -16
	I1217 19:21:35.247512    8502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:21:35.748116    8502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:21:36.247651    8502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:21:36.747794    8502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:21:37.248201    8502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:21:37.747686    8502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:21:38.248195    8502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:21:38.747818    8502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:21:39.248596    8502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:21:39.747770    8502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:21:40.247929    8502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:21:40.472819    8502 kubeadm.go:1114] duration metric: took 5.392670474s to wait for elevateKubeSystemPrivileges
	I1217 19:21:40.472860    8502 kubeadm.go:403] duration metric: took 18.93912387s to StartCluster
	I1217 19:21:40.472880    8502 settings.go:142] acquiring lock: {Name:mke3c622f98fffe95e3e848232032c1bad05dc71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:21:40.473034    8502 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22186-3611/kubeconfig
	I1217 19:21:40.473370    8502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-3611/kubeconfig: {Name:mk319ed0207c46a4a2ae4d9b320056846508447c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:21:40.473575    8502 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 19:21:40.473650    8502 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 19:21:40.473782    8502 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1217 19:21:40.473944    8502 addons.go:70] Setting yakd=true in profile "addons-886556"
	I1217 19:21:40.473933    8502 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-886556"
	I1217 19:21:40.473956    8502 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-886556"
	I1217 19:21:40.473960    8502 addons.go:70] Setting cloud-spanner=true in profile "addons-886556"
	I1217 19:21:40.473971    8502 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-886556"
	I1217 19:21:40.473990    8502 addons.go:70] Setting registry=true in profile "addons-886556"
	I1217 19:21:40.473994    8502 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-886556"
	I1217 19:21:40.474004    8502 addons.go:239] Setting addon registry=true in "addons-886556"
	I1217 19:21:40.474008    8502 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-886556"
	I1217 19:21:40.474016    8502 addons.go:70] Setting default-storageclass=true in profile "addons-886556"
	I1217 19:21:40.474023    8502 addons.go:239] Setting addon cloud-spanner=true in "addons-886556"
	I1217 19:21:40.474032    8502 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-886556"
	I1217 19:21:40.474041    8502 host.go:66] Checking if "addons-886556" exists ...
	I1217 19:21:40.474041    8502 host.go:66] Checking if "addons-886556" exists ...
	I1217 19:21:40.474045    8502 host.go:66] Checking if "addons-886556" exists ...
	I1217 19:21:40.474051    8502 addons.go:70] Setting registry-creds=true in profile "addons-886556"
	I1217 19:21:40.474063    8502 addons.go:239] Setting addon registry-creds=true in "addons-886556"
	I1217 19:21:40.474080    8502 host.go:66] Checking if "addons-886556" exists ...
	I1217 19:21:40.474107    8502 addons.go:70] Setting ingress-dns=true in profile "addons-886556"
	I1217 19:21:40.474125    8502 addons.go:239] Setting addon ingress-dns=true in "addons-886556"
	I1217 19:21:40.474153    8502 host.go:66] Checking if "addons-886556" exists ...
	I1217 19:21:40.474621    8502 addons.go:70] Setting inspektor-gadget=true in profile "addons-886556"
	I1217 19:21:40.474639    8502 addons.go:239] Setting addon inspektor-gadget=true in "addons-886556"
	I1217 19:21:40.474669    8502 host.go:66] Checking if "addons-886556" exists ...
	I1217 19:21:40.474909    8502 addons.go:70] Setting metrics-server=true in profile "addons-886556"
	I1217 19:21:40.474935    8502 addons.go:239] Setting addon metrics-server=true in "addons-886556"
	I1217 19:21:40.474963    8502 host.go:66] Checking if "addons-886556" exists ...
	I1217 19:21:40.475142    8502 addons.go:70] Setting storage-provisioner=true in profile "addons-886556"
	I1217 19:21:40.475160    8502 addons.go:239] Setting addon storage-provisioner=true in "addons-886556"
	I1217 19:21:40.475182    8502 host.go:66] Checking if "addons-886556" exists ...
	I1217 19:21:40.473948    8502 config.go:182] Loaded profile config "addons-886556": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:21:40.475284    8502 addons.go:70] Setting gcp-auth=true in profile "addons-886556"
	I1217 19:21:40.475301    8502 addons.go:70] Setting volcano=true in profile "addons-886556"
	I1217 19:21:40.475315    8502 addons.go:70] Setting ingress=true in profile "addons-886556"
	I1217 19:21:40.474041    8502 host.go:66] Checking if "addons-886556" exists ...
	I1217 19:21:40.475328    8502 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-886556"
	I1217 19:21:40.475331    8502 addons.go:239] Setting addon ingress=true in "addons-886556"
	I1217 19:21:40.475341    8502 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-886556"
	I1217 19:21:40.475363    8502 host.go:66] Checking if "addons-886556" exists ...
	I1217 19:21:40.473977    8502 addons.go:239] Setting addon yakd=true in "addons-886556"
	I1217 19:21:40.475848    8502 host.go:66] Checking if "addons-886556" exists ...
	I1217 19:21:40.476126    8502 addons.go:70] Setting volumesnapshots=true in profile "addons-886556"
	I1217 19:21:40.476149    8502 addons.go:239] Setting addon volumesnapshots=true in "addons-886556"
	I1217 19:21:40.476178    8502 host.go:66] Checking if "addons-886556" exists ...
	I1217 19:21:40.474005    8502 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-886556"
	I1217 19:21:40.476376    8502 host.go:66] Checking if "addons-886556" exists ...
	I1217 19:21:40.475306    8502 mustload.go:66] Loading cluster: addons-886556
	I1217 19:21:40.476427    8502 out.go:179] * Verifying Kubernetes components...
	I1217 19:21:40.475319    8502 addons.go:239] Setting addon volcano=true in "addons-886556"
	I1217 19:21:40.476619    8502 host.go:66] Checking if "addons-886556" exists ...
	I1217 19:21:40.476639    8502 config.go:182] Loaded profile config "addons-886556": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:21:40.478152    8502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 19:21:40.482793    8502 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1217 19:21:40.482809    8502 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1217 19:21:40.482847    8502 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1217 19:21:40.482843    8502 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1217 19:21:40.483732    8502 addons.go:239] Setting addon default-storageclass=true in "addons-886556"
	I1217 19:21:40.483780    8502 host.go:66] Checking if "addons-886556" exists ...
	I1217 19:21:40.484315    8502 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1217 19:21:40.484397    8502 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1217 19:21:40.484747    8502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1217 19:21:40.484403    8502 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1217 19:21:40.484790    8502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1217 19:21:40.484321    8502 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1217 19:21:40.484406    8502 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1217 19:21:40.484964    8502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1217 19:21:40.485105    8502 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1217 19:21:40.485346    8502 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-886556"
	I1217 19:21:40.485521    8502 host.go:66] Checking if "addons-886556" exists ...
	I1217 19:21:40.485976    8502 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1217 19:21:40.485992    8502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1217 19:21:40.486716    8502 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1217 19:21:40.486821    8502 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1217 19:21:40.487109    8502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1217 19:21:40.486724    8502 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 19:21:40.487271    8502 host.go:66] Checking if "addons-886556" exists ...
	W1217 19:21:40.487613    8502 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1217 19:21:40.487754    8502 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1217 19:21:40.487765    8502 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1217 19:21:40.487757    8502 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.6
	I1217 19:21:40.487782    8502 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1217 19:21:40.488636    8502 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1217 19:21:40.488760    8502 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 19:21:40.489045    8502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 19:21:40.487822    8502 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.1
	I1217 19:21:40.487784    8502 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1217 19:21:40.489786    8502 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 19:21:40.489372    8502 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1217 19:21:40.489803    8502 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 19:21:40.489815    8502 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1217 19:21:40.490200    8502 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1217 19:21:40.490210    8502 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1217 19:21:40.490217    8502 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1217 19:21:40.490305    8502 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1217 19:21:40.490657    8502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1217 19:21:40.491095    8502 out.go:179]   - Using image docker.io/registry:3.0.0
	I1217 19:21:40.492663    8502 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1217 19:21:40.492711    8502 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1217 19:21:40.492711    8502 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1217 19:21:40.492735    8502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1217 19:21:40.492663    8502 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1217 19:21:40.493970    8502 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1217 19:21:40.494001    8502 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1217 19:21:40.495909    8502 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1217 19:21:40.496003    8502 out.go:179]   - Using image docker.io/busybox:stable
	I1217 19:21:40.496061    8502 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1217 19:21:40.496077    8502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1217 19:21:40.496561    8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:40.497163    8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:40.497394    8502 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1217 19:21:40.497410    8502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1217 19:21:40.498135    8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:40.498326    8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:40.498421    8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
	I1217 19:21:40.498456    8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:40.498462    8502 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1217 19:21:40.498894    8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:40.499173    8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
	I1217 19:21:40.499206    8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:40.499267    8502 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/id_rsa Username:docker}
	I1217 19:21:40.500080    8502 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/id_rsa Username:docker}
	I1217 19:21:40.500123    8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
	I1217 19:21:40.500137    8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
	I1217 19:21:40.500149    8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:40.500156    8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:40.500593    8502 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1217 19:21:40.500692    8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
	I1217 19:21:40.500723    8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:40.501099    8502 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/id_rsa Username:docker}
	I1217 19:21:40.501451    8502 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/id_rsa Username:docker}
	I1217 19:21:40.501930    8502 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1217 19:21:40.501810    8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:40.501951    8502 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1217 19:21:40.502020    8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:40.501887    8502 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/id_rsa Username:docker}
	I1217 19:21:40.502745    8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:40.503201    8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:40.503373    8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
	I1217 19:21:40.503406    8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:40.503552    8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
	I1217 19:21:40.503589    8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:40.503787    8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:40.503851    8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
	I1217 19:21:40.503879    8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:40.503799    8502 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/id_rsa Username:docker}
	I1217 19:21:40.504010    8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:40.504312    8502 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/id_rsa Username:docker}
	I1217 19:21:40.504659    8502 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/id_rsa Username:docker}
	I1217 19:21:40.504701    8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
	I1217 19:21:40.504730    8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:40.505075    8502 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/id_rsa Username:docker}
	I1217 19:21:40.505101    8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
	I1217 19:21:40.505135    8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:40.505143    8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:40.505488    8502 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/id_rsa Username:docker}
	I1217 19:21:40.505515    8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
	I1217 19:21:40.505563    8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:40.505874    8502 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/id_rsa Username:docker}
	I1217 19:21:40.506140    8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
	I1217 19:21:40.506178    8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:40.506512    8502 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/id_rsa Username:docker}
	I1217 19:21:40.506888    8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:40.507344    8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
	I1217 19:21:40.507378    8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:40.507564    8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:40.507597    8502 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/id_rsa Username:docker}
	I1217 19:21:40.507892    8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:40.508049    8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
	I1217 19:21:40.508079    8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:40.508250    8502 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/id_rsa Username:docker}
	I1217 19:21:40.508266    8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
	I1217 19:21:40.508288    8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:40.508501    8502 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/id_rsa Username:docker}
	W1217 19:21:40.813649    8502 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:50364->192.168.39.92:22: read: connection reset by peer
	I1217 19:21:40.813686    8502 retry.go:31] will retry after 158.466174ms: ssh: handshake failed: read tcp 192.168.39.1:50364->192.168.39.92:22: read: connection reset by peer
	W1217 19:21:40.897865    8502 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:50392->192.168.39.92:22: read: connection reset by peer
	I1217 19:21:40.897894    8502 retry.go:31] will retry after 206.861546ms: ssh: handshake failed: read tcp 192.168.39.1:50392->192.168.39.92:22: read: connection reset by peer
	W1217 19:21:40.897945    8502 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:50406->192.168.39.92:22: read: connection reset by peer
	I1217 19:21:40.897952    8502 retry.go:31] will retry after 297.072336ms: ssh: handshake failed: read tcp 192.168.39.1:50406->192.168.39.92:22: read: connection reset by peer
	W1217 19:21:40.972836    8502 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1217 19:21:40.972871    8502 retry.go:31] will retry after 264.316513ms: ssh: handshake failed: EOF
	I1217 19:21:41.679362    8502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 19:21:41.718474    8502 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1217 19:21:41.718538    8502 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1217 19:21:41.726745    8502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1217 19:21:41.730336    8502 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1217 19:21:41.730364    8502 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1217 19:21:41.735134    8502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1217 19:21:41.737175    8502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 19:21:41.792374    8502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1217 19:21:41.801330    8502 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1217 19:21:41.801356    8502 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1217 19:21:41.851216    8502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1217 19:21:41.883237    8502 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1217 19:21:41.883265    8502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1217 19:21:41.925208    8502 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.451598746s)
	I1217 19:21:41.925277    8502 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.447094384s)
	I1217 19:21:41.925362    8502 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 19:21:41.925371    8502 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 19:21:41.973415    8502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1217 19:21:42.085214    8502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1217 19:21:42.105256    8502 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1217 19:21:42.105283    8502 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1217 19:21:42.143787    8502 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1217 19:21:42.143809    8502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1217 19:21:42.146900    8502 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1217 19:21:42.146920    8502 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1217 19:21:42.163409    8502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1217 19:21:42.237062    8502 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1217 19:21:42.237094    8502 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1217 19:21:42.259793    8502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1217 19:21:42.260571    8502 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1217 19:21:42.260592    8502 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1217 19:21:42.406120    8502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1217 19:21:42.408799    8502 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1217 19:21:42.408826    8502 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1217 19:21:42.429065    8502 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1217 19:21:42.429088    8502 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1217 19:21:42.509602    8502 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1217 19:21:42.509630    8502 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1217 19:21:42.524293    8502 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1217 19:21:42.524315    8502 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1217 19:21:42.772551    8502 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1217 19:21:42.772579    8502 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1217 19:21:42.908823    8502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1217 19:21:42.923626    8502 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1217 19:21:42.923648    8502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1217 19:21:42.985385    8502 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1217 19:21:42.985421    8502 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1217 19:21:43.234380    8502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.554978122s)
	I1217 19:21:43.267615    8502 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1217 19:21:43.267649    8502 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1217 19:21:43.283627    8502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1217 19:21:43.337602    8502 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1217 19:21:43.337634    8502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1217 19:21:43.658381    8502 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1217 19:21:43.658408    8502 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1217 19:21:43.834562    8502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1217 19:21:44.126762    8502 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1217 19:21:44.126788    8502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1217 19:21:44.635308    8502 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1217 19:21:44.635339    8502 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1217 19:21:44.993636    8502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.266841841s)
	I1217 19:21:45.129435    8502 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1217 19:21:45.129469    8502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1217 19:21:45.597811    8502 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1217 19:21:45.597834    8502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1217 19:21:46.022947    8502 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1217 19:21:46.022980    8502 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1217 19:21:46.523124    8502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1217 19:21:47.630300    8502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.89308497s)
	I1217 19:21:47.630354    8502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.837945861s)
	I1217 19:21:47.630465    8502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (5.895296003s)
	I1217 19:21:48.013967    8502 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1217 19:21:48.017699    8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:48.018177    8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
	I1217 19:21:48.018209    8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:48.018393    8502 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/id_rsa Username:docker}
	I1217 19:21:48.605673    8502 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1217 19:21:48.953080    8502 addons.go:239] Setting addon gcp-auth=true in "addons-886556"
	I1217 19:21:48.953139    8502 host.go:66] Checking if "addons-886556" exists ...
	I1217 19:21:48.955071    8502 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1217 19:21:48.957700    8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:48.958167    8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
	I1217 19:21:48.958200    8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
	I1217 19:21:48.958395    8502 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/id_rsa Username:docker}
	I1217 19:21:50.424100    8502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.572852063s)
	I1217 19:21:50.424136    8502 addons.go:495] Verifying addon ingress=true in "addons-886556"
	I1217 19:21:50.424156    8502 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.498766337s)
	I1217 19:21:50.424183    8502 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.49879458s)
	I1217 19:21:50.424280    8502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.339043218s)
	I1217 19:21:50.424356    8502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.260925596s)
	I1217 19:21:50.424227    8502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (8.450779052s)
	I1217 19:21:50.424415    8502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.164595771s)
	I1217 19:21:50.424456    8502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.018315091s)
	I1217 19:21:50.424480    8502 addons.go:495] Verifying addon registry=true in "addons-886556"
	I1217 19:21:50.424516    8502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.515663844s)
	I1217 19:21:50.424547    8502 addons.go:495] Verifying addon metrics-server=true in "addons-886556"
	I1217 19:21:50.424607    8502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.140944914s)
	I1217 19:21:50.424183    8502 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1217 19:21:50.425043    8502 node_ready.go:35] waiting up to 6m0s for node "addons-886556" to be "Ready" ...
	I1217 19:21:50.425732    8502 out.go:179] * Verifying ingress addon...
	I1217 19:21:50.426658    8502 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-886556 service yakd-dashboard -n yakd-dashboard
	
	I1217 19:21:50.426682    8502 out.go:179] * Verifying registry addon...
	I1217 19:21:50.428275    8502 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1217 19:21:50.428898    8502 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1217 19:21:50.483977    8502 node_ready.go:49] node "addons-886556" is "Ready"
	I1217 19:21:50.484006    8502 node_ready.go:38] duration metric: took 58.936702ms for node "addons-886556" to be "Ready" ...
	I1217 19:21:50.484027    8502 api_server.go:52] waiting for apiserver process to appear ...
	I1217 19:21:50.484090    8502 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 19:21:50.558240    8502 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1217 19:21:50.558259    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:21:50.564938    8502 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1217 19:21:50.564957    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:21:50.973795    8502 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-886556" context rescaled to 1 replicas
	I1217 19:21:50.976036    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:21:50.976221    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:21:50.984688    8502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.150087338s)
	W1217 19:21:50.984733    8502 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1217 19:21:50.984755    8502 retry.go:31] will retry after 263.9708ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1217 19:21:51.249819    8502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
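	(The failure above is the usual CRD ordering race: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is applied in the same kubectl invocation that creates the snapshot.storage.k8s.io CRDs, so the API server has no mapping for that kind yet. minikube just retries the whole apply, as shown on the line above, and it succeeds once the CRDs are established. Done by hand, a sketch of the equivalent workaround, using the binary and manifest paths from this run, would be to apply and wait for the CRDs first:
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl wait --for=condition=established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml)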
	I1217 19:21:51.441437    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:21:51.444567    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:21:51.831016    8502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.307833605s)
	I1217 19:21:51.831054    8502 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.875953123s)
	I1217 19:21:51.831066    8502 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-886556"
	I1217 19:21:51.831103    8502 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.346996092s)
	I1217 19:21:51.831121    8502 api_server.go:72] duration metric: took 11.357437459s to wait for apiserver process to appear ...
	I1217 19:21:51.831134    8502 api_server.go:88] waiting for apiserver healthz status ...
	I1217 19:21:51.831316    8502 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I1217 19:21:51.832519    8502 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1217 19:21:51.832554    8502 out.go:179] * Verifying csi-hostpath-driver addon...
	I1217 19:21:51.833756    8502 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1217 19:21:51.834606    8502 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1217 19:21:51.834937    8502 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1217 19:21:51.834952    8502 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1217 19:21:51.878563    8502 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1217 19:21:51.878586    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:21:51.888739    8502 api_server.go:279] https://192.168.39.92:8443/healthz returned 200:
	ok
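	(The healthz probe here is an HTTPS GET against the API server's /healthz endpoint. Outside this code path the same check can be made through the cluster kubeconfig, for example:
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl get --raw /healthz
	which returns the same plain "ok" body once the control plane is healthy.)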
	I1217 19:21:51.896070    8502 api_server.go:141] control plane version: v1.34.3
	I1217 19:21:51.896104    8502 api_server.go:131] duration metric: took 64.834878ms to wait for apiserver health ...
	I1217 19:21:51.896112    8502 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 19:21:51.935765    8502 system_pods.go:59] 20 kube-system pods found
	I1217 19:21:51.935846    8502 system_pods.go:61] "amd-gpu-device-plugin-z6w8r" [1dbe0a3c-a1f6-46e6-beac-d8931e039819] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1217 19:21:51.935858    8502 system_pods.go:61] "coredns-66bc5c9577-bgtrc" [96c9cfe3-ccd5-4697-8f1b-a72ebef1425b] Running
	I1217 19:21:51.935866    8502 system_pods.go:61] "coredns-66bc5c9577-xndpj" [cadb243f-ae46-400c-8188-a780a9a4974f] Running
	I1217 19:21:51.935874    8502 system_pods.go:61] "csi-hostpath-attacher-0" [585eb515-b0dc-4a5e-a272-1a0541460d7d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 19:21:51.935887    8502 system_pods.go:61] "csi-hostpath-resizer-0" [b286d59e-b1f1-43e0-95f4-45423fecf6d6] Pending
	I1217 19:21:51.935898    8502 system_pods.go:61] "csi-hostpathplugin-6fj9g" [97f5d123-7341-4ca5-9f44-39d65d8a4a4b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1217 19:21:51.935911    8502 system_pods.go:61] "etcd-addons-886556" [d8286b3c-24af-4b3e-8fb6-f96c18635f73] Running
	I1217 19:21:51.935918    8502 system_pods.go:61] "kube-apiserver-addons-886556" [74777e79-dac2-44c2-9c7c-dd2f363fe062] Running
	I1217 19:21:51.935923    8502 system_pods.go:61] "kube-controller-manager-addons-886556" [cace1c52-4336-4fb0-8de2-26bd11dc3ac8] Running
	I1217 19:21:51.935935    8502 system_pods.go:61] "kube-ingress-dns-minikube" [665e2f71-8383-415a-89ea-cb281553dc9e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1217 19:21:51.935946    8502 system_pods.go:61] "kube-proxy-tmm7b" [1dcd502e-bfdd-41d4-911e-b8cb873ebb8c] Running
	I1217 19:21:51.935953    8502 system_pods.go:61] "kube-scheduler-addons-886556" [e4e24a77-0291-4ac3-a317-13537ba593ad] Running
	I1217 19:21:51.935964    8502 system_pods.go:61] "metrics-server-85b7d694d7-qq7z2" [1a0a29d5-b863-4f43-8e30-20e811421d49] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 19:21:51.935976    8502 system_pods.go:61] "nvidia-device-plugin-daemonset-9r9hc" [687ccec9-fd49-4130-942a-adaa42174493] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 19:21:51.935990    8502 system_pods.go:61] "registry-6b586f9694-7vxz4" [51d280f0-5585-48ff-9878-7cdf3f790c88] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 19:21:51.936003    8502 system_pods.go:61] "registry-creds-764b6fb674-7jdnm" [61a01fac-adbf-4010-981c-9c91b42e786e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 19:21:51.936016    8502 system_pods.go:61] "registry-proxy-zf2zm" [d7cb4d26-907e-4609-8385-a07e0958bd41] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 19:21:51.936029    8502 system_pods.go:61] "snapshot-controller-7d9fbc56b8-96c6l" [6882de24-8733-4ef1-88d5-73ffcab02127] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 19:21:51.936046    8502 system_pods.go:61] "snapshot-controller-7d9fbc56b8-w7czp" [f4b470a5-b443-4c15-911f-8b4bc6ac894d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 19:21:51.936061    8502 system_pods.go:61] "storage-provisioner" [e51b534c-7297-4901-a6e7-63d89d9275dc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 19:21:51.936073    8502 system_pods.go:74] duration metric: took 39.952611ms to wait for pod list to return data ...
	I1217 19:21:51.936089    8502 default_sa.go:34] waiting for default service account to be created ...
	I1217 19:21:51.960460    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:21:51.963147    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:21:51.968320    8502 default_sa.go:45] found service account: "default"
	I1217 19:21:51.968351    8502 default_sa.go:55] duration metric: took 32.251173ms for default service account to be created ...
	I1217 19:21:51.968364    8502 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 19:21:51.981850    8502 system_pods.go:86] 20 kube-system pods found
	I1217 19:21:51.981890    8502 system_pods.go:89] "amd-gpu-device-plugin-z6w8r" [1dbe0a3c-a1f6-46e6-beac-d8931e039819] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1217 19:21:51.981914    8502 system_pods.go:89] "coredns-66bc5c9577-bgtrc" [96c9cfe3-ccd5-4697-8f1b-a72ebef1425b] Running
	I1217 19:21:51.981923    8502 system_pods.go:89] "coredns-66bc5c9577-xndpj" [cadb243f-ae46-400c-8188-a780a9a4974f] Running
	I1217 19:21:51.981930    8502 system_pods.go:89] "csi-hostpath-attacher-0" [585eb515-b0dc-4a5e-a272-1a0541460d7d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 19:21:51.981937    8502 system_pods.go:89] "csi-hostpath-resizer-0" [b286d59e-b1f1-43e0-95f4-45423fecf6d6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1217 19:21:51.981947    8502 system_pods.go:89] "csi-hostpathplugin-6fj9g" [97f5d123-7341-4ca5-9f44-39d65d8a4a4b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1217 19:21:51.981953    8502 system_pods.go:89] "etcd-addons-886556" [d8286b3c-24af-4b3e-8fb6-f96c18635f73] Running
	I1217 19:21:51.981962    8502 system_pods.go:89] "kube-apiserver-addons-886556" [74777e79-dac2-44c2-9c7c-dd2f363fe062] Running
	I1217 19:21:51.981971    8502 system_pods.go:89] "kube-controller-manager-addons-886556" [cace1c52-4336-4fb0-8de2-26bd11dc3ac8] Running
	I1217 19:21:51.981984    8502 system_pods.go:89] "kube-ingress-dns-minikube" [665e2f71-8383-415a-89ea-cb281553dc9e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1217 19:21:51.981994    8502 system_pods.go:89] "kube-proxy-tmm7b" [1dcd502e-bfdd-41d4-911e-b8cb873ebb8c] Running
	I1217 19:21:51.982000    8502 system_pods.go:89] "kube-scheduler-addons-886556" [e4e24a77-0291-4ac3-a317-13537ba593ad] Running
	I1217 19:21:51.982007    8502 system_pods.go:89] "metrics-server-85b7d694d7-qq7z2" [1a0a29d5-b863-4f43-8e30-20e811421d49] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 19:21:51.982016    8502 system_pods.go:89] "nvidia-device-plugin-daemonset-9r9hc" [687ccec9-fd49-4130-942a-adaa42174493] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 19:21:51.982028    8502 system_pods.go:89] "registry-6b586f9694-7vxz4" [51d280f0-5585-48ff-9878-7cdf3f790c88] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 19:21:51.982036    8502 system_pods.go:89] "registry-creds-764b6fb674-7jdnm" [61a01fac-adbf-4010-981c-9c91b42e786e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 19:21:51.982048    8502 system_pods.go:89] "registry-proxy-zf2zm" [d7cb4d26-907e-4609-8385-a07e0958bd41] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 19:21:51.982057    8502 system_pods.go:89] "snapshot-controller-7d9fbc56b8-96c6l" [6882de24-8733-4ef1-88d5-73ffcab02127] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 19:21:51.982070    8502 system_pods.go:89] "snapshot-controller-7d9fbc56b8-w7czp" [f4b470a5-b443-4c15-911f-8b4bc6ac894d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 19:21:51.982079    8502 system_pods.go:89] "storage-provisioner" [e51b534c-7297-4901-a6e7-63d89d9275dc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 19:21:51.982093    8502 system_pods.go:126] duration metric: took 13.721224ms to wait for k8s-apps to be running ...
	I1217 19:21:51.982108    8502 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 19:21:51.982158    8502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 19:21:51.985938    8502 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1217 19:21:51.985963    8502 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1217 19:21:52.099803    8502 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1217 19:21:52.099832    8502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1217 19:21:52.152451    8502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1217 19:21:52.344609    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:21:52.443598    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:21:52.443638    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:21:52.840415    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:21:52.936447    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:21:52.937459    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:21:53.209718    8502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.95985204s)
	I1217 19:21:53.209741    8502 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.227562478s)
	I1217 19:21:53.209778    8502 system_svc.go:56] duration metric: took 1.227665633s WaitForService to wait for kubelet
	I1217 19:21:53.209793    8502 kubeadm.go:587] duration metric: took 12.736107872s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 19:21:53.209819    8502 node_conditions.go:102] verifying NodePressure condition ...
	I1217 19:21:53.217219    8502 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1217 19:21:53.217249    8502 node_conditions.go:123] node cpu capacity is 2
	I1217 19:21:53.217266    8502 node_conditions.go:105] duration metric: took 7.440359ms to run NodePressure ...
	I1217 19:21:53.217280    8502 start.go:242] waiting for startup goroutines ...
	I1217 19:21:53.359621    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:21:53.470918    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:21:53.477784    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:21:53.722863    8502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.570354933s)
	I1217 19:21:53.724055    8502 addons.go:495] Verifying addon gcp-auth=true in "addons-886556"
	I1217 19:21:53.726959    8502 out.go:179] * Verifying gcp-auth addon...
	I1217 19:21:53.729079    8502 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1217 19:21:53.753665    8502 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1217 19:21:53.753687    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:21:53.854963    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:21:53.938959    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:21:53.942951    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:21:54.234428    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:21:54.359344    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:21:54.460350    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:21:54.461013    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:21:54.733520    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:21:54.839864    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:21:54.932075    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:21:54.939217    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:21:55.233635    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:21:55.339202    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:21:55.434004    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:21:55.434077    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:21:55.733429    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:21:55.839443    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:21:55.933204    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:21:55.934074    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:21:56.237241    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:21:56.353238    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:21:56.434742    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:21:56.437459    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:21:56.745225    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:21:56.839658    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:21:56.944721    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:21:56.945237    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:21:57.234721    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:21:57.339871    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:21:57.440356    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:21:57.440539    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:21:57.733084    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:21:57.839989    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:21:57.935065    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:21:57.940764    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:21:58.239269    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:21:58.342930    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:21:58.432709    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:21:58.433916    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:21:58.735515    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:21:58.841631    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:21:58.936112    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:21:58.936320    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:21:59.234218    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:21:59.343279    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:21:59.438091    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:21:59.440386    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:21:59.732547    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:21:59.840134    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:21:59.933365    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:21:59.933373    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:00.233349    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:00.342425    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:00.439813    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:00.440101    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:00.734020    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:00.839469    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:00.932190    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:00.934915    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:01.232557    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:01.339359    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:01.432551    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:01.433814    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:01.734509    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:01.840710    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:01.932920    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:01.933794    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:02.233710    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:02.339557    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:02.432713    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:02.433224    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:02.732781    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:02.839392    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:02.933916    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:02.934063    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:03.232970    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:03.341994    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:03.435346    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:03.435676    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:03.734482    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:03.839926    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:03.933699    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:03.934493    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:04.234320    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:04.342220    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:04.434925    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:04.434992    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:04.733513    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:04.840999    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:04.932705    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:04.932984    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:05.253664    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:05.338808    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:05.434935    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:05.435003    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:05.733326    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:05.844396    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:05.933142    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:05.933460    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:06.234664    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:06.338667    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:06.433265    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:06.434253    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:06.733174    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:06.845961    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:06.938661    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:06.939060    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:07.235694    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:07.339013    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:07.432299    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:07.433659    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:07.733774    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:07.839003    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:07.933823    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:07.933923    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:08.234274    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:08.339969    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:08.433478    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:08.433724    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:08.733629    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:08.837948    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:08.938825    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:08.939021    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:09.232935    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:09.339058    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:09.433163    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:09.433272    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:09.733331    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:09.839735    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:09.933966    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:09.935392    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:10.236959    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:10.342728    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:10.437995    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:10.443821    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:10.734744    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:10.841991    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:10.937125    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:10.938731    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:11.237817    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:11.340551    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:11.437933    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:11.440663    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:11.736778    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:11.845115    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:11.934953    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:11.936435    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:12.238155    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:12.340565    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:12.431946    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:12.434831    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:12.737283    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:12.838913    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:12.939424    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:12.939751    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:13.238685    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:13.441085    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:13.449590    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:13.452486    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:13.736704    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:13.840293    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:13.937557    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:13.938875    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:14.235284    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:14.341809    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:14.436231    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:14.440004    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:14.733570    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:14.840735    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:15.034005    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:15.035598    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:15.234183    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:15.340704    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:15.431909    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:15.435538    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:15.736474    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:15.840515    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:15.933005    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:15.938668    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:16.233979    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:16.342114    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:16.432422    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:16.435759    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:16.735389    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:16.839762    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:16.937260    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:16.939115    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:17.238960    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:17.348944    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:17.433254    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:17.434937    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:17.736566    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:17.838369    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:17.935563    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:17.935879    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:18.233492    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:18.340626    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:18.434051    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:18.434334    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:18.732657    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:18.839133    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:18.933541    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:18.936017    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:19.233611    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:19.338855    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:19.433259    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:19.434379    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:19.732511    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:19.839705    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:19.932944    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:19.933232    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:20.232837    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:20.341244    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:20.435711    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:20.437837    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:20.736475    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:20.839724    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:20.935829    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:20.937461    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:21.234269    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:21.341469    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:21.437099    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:21.440897    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:21.735004    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:21.841716    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:21.940646    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:21.940939    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:22.232343    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:22.340129    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:22.432276    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:22.432710    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:22.734589    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:22.839238    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:22.932925    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:22.934490    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:23.232689    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:23.338508    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:23.432757    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:23.433174    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:23.734012    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:23.838707    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:23.932150    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:23.932776    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:24.233789    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:24.341026    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:24.437247    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:24.437432    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:24.734919    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:24.839464    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:24.933604    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:24.935175    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:25.234988    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:25.339402    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:25.432024    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:25.434225    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:25.736607    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:25.840739    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:25.935361    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:25.935620    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:26.234158    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:26.339875    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:26.433218    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:26.433855    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:26.735895    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:26.840603    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:26.932295    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:26.934268    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:27.235743    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:27.341244    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:27.434626    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:22:27.435665    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:27.736117    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:27.842335    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:27.934880    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:27.935803    8502 kapi.go:107] duration metric: took 37.506902072s to wait for kubernetes.io/minikube-addons=registry ...
	I1217 19:22:28.234127    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:28.339121    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:28.434024    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:28.816605    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:28.891642    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:28.934141    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:29.232768    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:29.338330    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:29.432676    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:29.743187    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:29.843363    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:29.932432    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:30.232655    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:30.338067    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:30.432676    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:30.733553    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:30.839874    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:30.932356    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:31.232304    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:31.342822    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:31.435243    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:31.732598    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:31.842359    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:31.932837    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:32.238002    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:32.339102    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:32.435168    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:32.734015    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:32.839421    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:32.935248    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:33.236904    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:33.346595    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:33.442620    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:33.733112    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:33.840716    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:33.934050    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:34.233131    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:34.341156    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:34.434741    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:34.732995    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:34.838643    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:34.931699    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:35.233370    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:35.339623    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:35.432716    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:35.732595    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:35.839689    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:35.940232    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:36.232781    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:36.338921    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:36.432063    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:36.731700    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:36.842769    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:36.932010    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:37.234221    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:37.340400    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:37.526396    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:37.734800    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:37.839130    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:37.933439    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:38.233027    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:38.340464    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:38.432622    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:38.733191    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:38.840639    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:38.937357    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:39.233454    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:39.339701    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:39.432033    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:39.735320    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:39.840189    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:39.935291    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:40.232945    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:40.339237    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:40.434853    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:40.731862    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:40.838476    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:40.932039    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:41.231961    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:41.338863    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:41.437765    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:41.734607    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:41.838149    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:41.932687    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:42.233077    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:42.339168    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:42.432368    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:42.735585    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:42.838269    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:42.933517    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:43.235990    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:43.561818    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:43.561966    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:43.737120    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:43.838881    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:43.933152    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:44.236804    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:44.341541    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:44.434170    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:44.734185    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:44.839901    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:44.931988    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:45.233205    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:45.340500    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:45.431786    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:45.735962    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:45.840675    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:45.932856    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:46.234945    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:46.339034    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:46.433919    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:46.734783    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:46.841406    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:46.934957    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:47.238126    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:47.339038    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:47.434012    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:47.738904    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:47.839043    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:47.937312    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:48.236251    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:48.339475    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:48.433429    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:48.735000    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:48.840588    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:48.934898    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:49.234420    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:49.349421    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:49.433891    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:49.735340    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:49.839405    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:49.932911    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:50.234152    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:50.339686    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:50.432672    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:50.734482    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:50.846160    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:50.935701    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:51.232518    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:51.343432    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:51.436858    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:51.734035    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:51.842425    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:51.942305    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:52.233390    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:52.350383    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:52.436573    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:52.734604    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:52.842618    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:52.937124    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:53.569781    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:53.569895    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:53.570824    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:53.733647    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:53.842656    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:53.944145    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:54.234979    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:54.349480    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:54.433768    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:54.735417    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:54.840875    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:54.934672    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:55.238241    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:55.340475    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:55.443641    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:55.735092    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:55.842457    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:55.935969    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:56.235859    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:56.338587    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:56.433367    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:56.735074    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:56.839619    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:56.931821    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:57.233705    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:57.341845    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:57.437693    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:57.732796    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:57.839014    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:57.934645    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:58.235982    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:58.340983    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:58.435006    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:58.751880    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:59.024243    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:59.027423    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:59.235133    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:59.338729    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:59.431668    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:22:59.732998    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:22:59.841991    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:22:59.939014    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:23:00.235840    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:23:00.339160    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:23:00.432970    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:23:00.733043    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:23:00.838852    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:23:00.932938    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:23:01.233725    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:23:01.337949    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:23:01.432843    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:23:01.738044    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:23:01.842212    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:23:02.162095    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:23:02.233736    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:23:02.338913    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:23:02.434474    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:23:02.733433    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:23:02.840399    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:23:02.932380    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:23:03.236199    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:23:03.358437    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:23:03.433309    8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:23:03.733313    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:23:03.839945    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:23:03.934510    8502 kapi.go:107] duration metric: took 1m13.506231166s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1217 19:23:04.235692    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:23:04.340565    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:23:04.735287    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:23:04.840630    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:23:05.233565    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:23:05.340006    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:23:05.733216    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:23:05.839544    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:23:06.233813    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:23:06.340424    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:23:06.733835    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:23:06.847951    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:23:07.236307    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:23:07.339192    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:23:07.735346    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:23:07.841731    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:23:08.234209    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:23:08.341047    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:23:08.732889    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:23:08.839965    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:23:09.234331    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:23:09.341597    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:23:09.734074    8502 kapi.go:107] duration metric: took 1m16.004994998s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1217 19:23:09.735916    8502 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-886556 cluster.
	I1217 19:23:09.737437    8502 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1217 19:23:09.738904    8502 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1217 19:23:09.841191    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:23:10.343089    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:23:10.842783    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:23:11.341925    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:23:11.841220    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:23:12.340015    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:23:12.839452    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:23:13.341158    8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:23:13.839177    8502 kapi.go:107] duration metric: took 1m22.004570276s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1217 19:23:13.841161    8502 out.go:179] * Enabled addons: default-storageclass, cloud-spanner, storage-provisioner, amd-gpu-device-plugin, inspektor-gadget, ingress-dns, registry-creds, nvidia-device-plugin, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1217 19:23:13.842419    8502 addons.go:530] duration metric: took 1m33.368643369s for enable addons: enabled=[default-storageclass cloud-spanner storage-provisioner amd-gpu-device-plugin inspektor-gadget ingress-dns registry-creds nvidia-device-plugin metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1217 19:23:13.842475    8502 start.go:247] waiting for cluster config update ...
	I1217 19:23:13.842504    8502 start.go:256] writing updated cluster config ...
	I1217 19:23:13.842825    8502 ssh_runner.go:195] Run: rm -f paused
	I1217 19:23:13.853136    8502 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 19:23:13.860762    8502 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-xndpj" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:23:13.869825    8502 pod_ready.go:94] pod "coredns-66bc5c9577-xndpj" is "Ready"
	I1217 19:23:13.869853    8502 pod_ready.go:86] duration metric: took 9.058747ms for pod "coredns-66bc5c9577-xndpj" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:23:13.873268    8502 pod_ready.go:83] waiting for pod "etcd-addons-886556" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:23:13.881166    8502 pod_ready.go:94] pod "etcd-addons-886556" is "Ready"
	I1217 19:23:13.881199    8502 pod_ready.go:86] duration metric: took 7.898744ms for pod "etcd-addons-886556" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:23:13.884855    8502 pod_ready.go:83] waiting for pod "kube-apiserver-addons-886556" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:23:13.894692    8502 pod_ready.go:94] pod "kube-apiserver-addons-886556" is "Ready"
	I1217 19:23:13.894720    8502 pod_ready.go:86] duration metric: took 9.839755ms for pod "kube-apiserver-addons-886556" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:23:13.898037    8502 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-886556" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:23:14.259100    8502 pod_ready.go:94] pod "kube-controller-manager-addons-886556" is "Ready"
	I1217 19:23:14.259131    8502 pod_ready.go:86] duration metric: took 361.065761ms for pod "kube-controller-manager-addons-886556" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:23:14.459320    8502 pod_ready.go:83] waiting for pod "kube-proxy-tmm7b" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:23:14.858358    8502 pod_ready.go:94] pod "kube-proxy-tmm7b" is "Ready"
	I1217 19:23:14.858385    8502 pod_ready.go:86] duration metric: took 399.024903ms for pod "kube-proxy-tmm7b" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:23:15.058685    8502 pod_ready.go:83] waiting for pod "kube-scheduler-addons-886556" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:23:15.458699    8502 pod_ready.go:94] pod "kube-scheduler-addons-886556" is "Ready"
	I1217 19:23:15.458728    8502 pod_ready.go:86] duration metric: took 400.011797ms for pod "kube-scheduler-addons-886556" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:23:15.458742    8502 pod_ready.go:40] duration metric: took 1.605568743s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 19:23:15.509910    8502 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1217 19:23:15.512545    8502 out.go:179] * Done! kubectl is now configured to use "addons-886556" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 17 19:26:28 addons-886556 crio[809]: time="2025-12-17 19:26:28.011349817Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765999588011285431,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:551113,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=10dcc2ed-e717-4404-ab4d-f56620530ef8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 19:26:28 addons-886556 crio[809]: time="2025-12-17 19:26:28.014255974Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=663b610e-1c76-40a8-be12-009040d7a141 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 19:26:28 addons-886556 crio[809]: time="2025-12-17 19:26:28.014635338Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=663b610e-1c76-40a8-be12-009040d7a141 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 19:26:28 addons-886556 crio[809]: time="2025-12-17 19:26:28.015605767Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:509c2e34906e3c016d03d51db722a340d140c0ed93e7fe3c711e9850b6570161,PodSandboxId:ab87d79e904d52cec8a616a57e0a377a4236bcc9340e467029894f8b9bb3a395,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765999445844268114,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6dccff02-c09a-4293-83a1-fd22a7c40b8c,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81ab5371b23fe766fc6be499ce17f2093c0f26ec5dae9f5758074ff01194c13b,PodSandboxId:0b7413f5092011d719adf5bd50f250a94a00c1c099a8e42118676aa95c1933e2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765999400971257972,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2be54a14-f7e4-4cce-a350-4f3c9438f053,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83bfed642b76ad03ba6be908e3f00f6e22893595b6905d9a972bcc02ec8db95c,PodSandboxId:4230ae39e001d85cb49e9c9db1deab46baa32a80d9c9b9ea791c3042fefd07e3,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765999382449717188,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-d7b4h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: cdae934d-e441-44d9-8be3-38eda9dbad52,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount:
2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f1ae3617bb443308f1cccdabefc2860f9716c1b01e9f7982834af654e5f87f1,PodSandboxId:9ae07f54f02cef14156432bdf3b38be60352efae4fa7d61f30ea4f078d6b961e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765999382401366680,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-2lds5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5a310601-c39b-42c0-a572-5471fbb24856,},Annotations:map[string]string{io.kubernetes.container.hash: 6f360
61b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:cbfa46bd7651d15668ba7d18b6803cfff873f769b7a87dde5dccf615ecb8645e,PodSandboxId:5cb502d1fe0a1fb64e06f3134da1eb3b436887888664fee82b722f2df774fb3e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7
e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765999367952728755,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-fg4xw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0fe75cec-73ac-48c0-81b3-fc95913a3fbd,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bb1564c0fd8234f5ffc6fac2d4f1840e80da9bb871b6d578d43e86aa34bbe86,PodSandboxId:58e9f5510ce02ca6ff09f198c69741f18c5b9ee30ff8d0d5f0937c8b9d654667,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765999337624421140,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 665e2f71-8383-415a-89ea-cb281553dc9e,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:842e06a810a24be894b68fbaa693a58ec50f0780b6754c852d8edbb419ae904e,PodSandboxId:c5947a163b040dfa0a7db2ee1530e171bcc0f8adf6afa8ad14f7a3247c4ff2e0,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f38
35498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765999312566436038,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-z6w8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dbe0a3c-a1f6-46e6-beac-d8931e039819,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e17df536b7e48e5734a2ace40c0cfdca4505d136dadcb49d48a341c5ad44be2f,PodSandboxId:1dc437030c33e1f1b1c3f7446b75eb228f28ea94d1356064d5dd5f9cf7ae961c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765999311590572973,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e51b534c-7297-4901-a6e7-63d89d9275dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08c9eb9a61ed32f25d7494c83d66ee26c80547bdfab107bdfafd06c71a2ce806,PodSandboxId:1bd82f4b4a856e37598b87b018b24c4eead12a4497a85917c3d6b85ac6a028a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a
9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765999301536234157,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-xndpj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cadb243f-ae46-400c-8188-a780a9a4974f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f18a26473585ecbde03c3eae1a070a8594be51992faebfb78ac9a623c2fd6e6c,PodSandboxId:609664b8e6016381ee97b9d8602bb0a177dd801ecc704bd7130ad1e285a236dc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1765999300183927893,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmm7b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dcd502e-bfdd-41d4-911e-b8cb873ebb8c,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9f0548c6961aa8c7b3e0b8fb9bdae38d6af780451f3b6c84a7aedb37b1535f3,PodSandboxId:7d557bd8b150e0672357cad152f999aa5e279782d859fed266a443cd72e9a535,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765999287250536093,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-886556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 883b266f381f344ce15f08d1cdc57113,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartC
ount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e9006ec843a3c6ebc7d5a43db3e8e6d3798a906f5eae750ae071dcedce2d68,PodSandboxId:1c3372d0e8f698bbe6acbf4a19f230c2a81aebb86c68b93f563a057aaeb1fd45,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765999287279063135,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-886556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1f8a4b64afdd22b1a13b05efdc91f50,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe
-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5e9c28401ad79fd2540b52da660361d5bb63b6cdeeb79bbf826a753949bd7b5,PodSandboxId:354185a9c4dc542ecb18d84642e4dca83747cbba64ac2bf8693e84ccc579b684,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765999287231033799,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-886556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a77be74558da47219e6b04daea8f969,},Annotations:
map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:638eb74bc3cef930c7aea686a3049dad59b532c6928943988a51c6a42a17fd62,PodSandboxId:b561eebc576741724fb933d2adbb05606abb086448e25dc0a4c21240c6eda634,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765999287203307145,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-886556,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 8c2aa17c88f5bbf01e49cd999fb78dc2,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=663b610e-1c76-40a8-be12-009040d7a141 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 19:26:28 addons-886556 crio[809]: time="2025-12-17 19:26:28.055709854Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=78009c01-88a6-44f1-b28f-82de801d6d1d name=/runtime.v1.RuntimeService/Version
	Dec 17 19:26:28 addons-886556 crio[809]: time="2025-12-17 19:26:28.055849394Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=78009c01-88a6-44f1-b28f-82de801d6d1d name=/runtime.v1.RuntimeService/Version
	Dec 17 19:26:28 addons-886556 crio[809]: time="2025-12-17 19:26:28.057811704Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3f9a0ee6-3602-4437-b645-12cfddcd773a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 19:26:28 addons-886556 crio[809]: time="2025-12-17 19:26:28.059084415Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765999588059051828,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:551113,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3f9a0ee6-3602-4437-b645-12cfddcd773a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 19:26:28 addons-886556 crio[809]: time="2025-12-17 19:26:28.060348594Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4654f1d0-a28d-479e-90db-15833f971e16 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 19:26:28 addons-886556 crio[809]: time="2025-12-17 19:26:28.060429930Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4654f1d0-a28d-479e-90db-15833f971e16 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 19:26:28 addons-886556 crio[809]: time="2025-12-17 19:26:28.060807457Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:509c2e34906e3c016d03d51db722a340d140c0ed93e7fe3c711e9850b6570161,PodSandboxId:ab87d79e904d52cec8a616a57e0a377a4236bcc9340e467029894f8b9bb3a395,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765999445844268114,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6dccff02-c09a-4293-83a1-fd22a7c40b8c,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81ab5371b23fe766fc6be499ce17f2093c0f26ec5dae9f5758074ff01194c13b,PodSandboxId:0b7413f5092011d719adf5bd50f250a94a00c1c099a8e42118676aa95c1933e2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765999400971257972,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2be54a14-f7e4-4cce-a350-4f3c9438f053,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83bfed642b76ad03ba6be908e3f00f6e22893595b6905d9a972bcc02ec8db95c,PodSandboxId:4230ae39e001d85cb49e9c9db1deab46baa32a80d9c9b9ea791c3042fefd07e3,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765999382449717188,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-d7b4h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: cdae934d-e441-44d9-8be3-38eda9dbad52,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount:
2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f1ae3617bb443308f1cccdabefc2860f9716c1b01e9f7982834af654e5f87f1,PodSandboxId:9ae07f54f02cef14156432bdf3b38be60352efae4fa7d61f30ea4f078d6b961e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765999382401366680,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-2lds5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5a310601-c39b-42c0-a572-5471fbb24856,},Annotations:map[string]string{io.kubernetes.container.hash: 6f360
61b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:cbfa46bd7651d15668ba7d18b6803cfff873f769b7a87dde5dccf615ecb8645e,PodSandboxId:5cb502d1fe0a1fb64e06f3134da1eb3b436887888664fee82b722f2df774fb3e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7
e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765999367952728755,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-fg4xw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0fe75cec-73ac-48c0-81b3-fc95913a3fbd,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bb1564c0fd8234f5ffc6fac2d4f1840e80da9bb871b6d578d43e86aa34bbe86,PodSandboxId:58e9f5510ce02ca6ff09f198c69741f18c5b9ee30ff8d0d5f0937c8b9d654667,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765999337624421140,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 665e2f71-8383-415a-89ea-cb281553dc9e,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:842e06a810a24be894b68fbaa693a58ec50f0780b6754c852d8edbb419ae904e,PodSandboxId:c5947a163b040dfa0a7db2ee1530e171bcc0f8adf6afa8ad14f7a3247c4ff2e0,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f38
35498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765999312566436038,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-z6w8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dbe0a3c-a1f6-46e6-beac-d8931e039819,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e17df536b7e48e5734a2ace40c0cfdca4505d136dadcb49d48a341c5ad44be2f,PodSandboxId:1dc437030c33e1f1b1c3f7446b75eb228f28ea94d1356064d5dd5f9cf7ae961c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765999311590572973,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e51b534c-7297-4901-a6e7-63d89d9275dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08c9eb9a61ed32f25d7494c83d66ee26c80547bdfab107bdfafd06c71a2ce806,PodSandboxId:1bd82f4b4a856e37598b87b018b24c4eead12a4497a85917c3d6b85ac6a028a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a
9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765999301536234157,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-xndpj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cadb243f-ae46-400c-8188-a780a9a4974f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f18a26473585ecbde03c3eae1a070a8594be51992faebfb78ac9a623c2fd6e6c,PodSandboxId:609664b8e6016381ee97b9d8602bb0a177dd801ecc704bd7130ad1e285a236dc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1765999300183927893,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmm7b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dcd502e-bfdd-41d4-911e-b8cb873ebb8c,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9f0548c6961aa8c7b3e0b8fb9bdae38d6af780451f3b6c84a7aedb37b1535f3,PodSandboxId:7d557bd8b150e0672357cad152f999aa5e279782d859fed266a443cd72e9a535,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765999287250536093,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-886556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 883b266f381f344ce15f08d1cdc57113,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartC
ount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e9006ec843a3c6ebc7d5a43db3e8e6d3798a906f5eae750ae071dcedce2d68,PodSandboxId:1c3372d0e8f698bbe6acbf4a19f230c2a81aebb86c68b93f563a057aaeb1fd45,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765999287279063135,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-886556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1f8a4b64afdd22b1a13b05efdc91f50,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe
-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5e9c28401ad79fd2540b52da660361d5bb63b6cdeeb79bbf826a753949bd7b5,PodSandboxId:354185a9c4dc542ecb18d84642e4dca83747cbba64ac2bf8693e84ccc579b684,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765999287231033799,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-886556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a77be74558da47219e6b04daea8f969,},Annotations:
map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:638eb74bc3cef930c7aea686a3049dad59b532c6928943988a51c6a42a17fd62,PodSandboxId:b561eebc576741724fb933d2adbb05606abb086448e25dc0a4c21240c6eda634,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765999287203307145,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-886556,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 8c2aa17c88f5bbf01e49cd999fb78dc2,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4654f1d0-a28d-479e-90db-15833f971e16 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 19:26:28 addons-886556 crio[809]: time="2025-12-17 19:26:28.096113728Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b70f0a03-a9ca-4fdb-9c17-b98943476173 name=/runtime.v1.RuntimeService/Version
	Dec 17 19:26:28 addons-886556 crio[809]: time="2025-12-17 19:26:28.096275067Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b70f0a03-a9ca-4fdb-9c17-b98943476173 name=/runtime.v1.RuntimeService/Version
	Dec 17 19:26:28 addons-886556 crio[809]: time="2025-12-17 19:26:28.098096879Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=50857181-ee4e-45b8-af97-1a5788290033 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 19:26:28 addons-886556 crio[809]: time="2025-12-17 19:26:28.099455013Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765999588099423467,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:551113,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=50857181-ee4e-45b8-af97-1a5788290033 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 19:26:28 addons-886556 crio[809]: time="2025-12-17 19:26:28.100380434Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5aa122d4-0279-49d2-8319-3abdf8ccc97b name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 19:26:28 addons-886556 crio[809]: time="2025-12-17 19:26:28.100461463Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5aa122d4-0279-49d2-8319-3abdf8ccc97b name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 19:26:28 addons-886556 crio[809]: time="2025-12-17 19:26:28.100822118Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:509c2e34906e3c016d03d51db722a340d140c0ed93e7fe3c711e9850b6570161,PodSandboxId:ab87d79e904d52cec8a616a57e0a377a4236bcc9340e467029894f8b9bb3a395,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765999445844268114,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6dccff02-c09a-4293-83a1-fd22a7c40b8c,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81ab5371b23fe766fc6be499ce17f2093c0f26ec5dae9f5758074ff01194c13b,PodSandboxId:0b7413f5092011d719adf5bd50f250a94a00c1c099a8e42118676aa95c1933e2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765999400971257972,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2be54a14-f7e4-4cce-a350-4f3c9438f053,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83bfed642b76ad03ba6be908e3f00f6e22893595b6905d9a972bcc02ec8db95c,PodSandboxId:4230ae39e001d85cb49e9c9db1deab46baa32a80d9c9b9ea791c3042fefd07e3,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765999382449717188,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-d7b4h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: cdae934d-e441-44d9-8be3-38eda9dbad52,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount:
2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f1ae3617bb443308f1cccdabefc2860f9716c1b01e9f7982834af654e5f87f1,PodSandboxId:9ae07f54f02cef14156432bdf3b38be60352efae4fa7d61f30ea4f078d6b961e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765999382401366680,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-2lds5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5a310601-c39b-42c0-a572-5471fbb24856,},Annotations:map[string]string{io.kubernetes.container.hash: 6f360
61b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:cbfa46bd7651d15668ba7d18b6803cfff873f769b7a87dde5dccf615ecb8645e,PodSandboxId:5cb502d1fe0a1fb64e06f3134da1eb3b436887888664fee82b722f2df774fb3e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7
e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765999367952728755,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-fg4xw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0fe75cec-73ac-48c0-81b3-fc95913a3fbd,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bb1564c0fd8234f5ffc6fac2d4f1840e80da9bb871b6d578d43e86aa34bbe86,PodSandboxId:58e9f5510ce02ca6ff09f198c69741f18c5b9ee30ff8d0d5f0937c8b9d654667,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765999337624421140,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 665e2f71-8383-415a-89ea-cb281553dc9e,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:842e06a810a24be894b68fbaa693a58ec50f0780b6754c852d8edbb419ae904e,PodSandboxId:c5947a163b040dfa0a7db2ee1530e171bcc0f8adf6afa8ad14f7a3247c4ff2e0,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f38
35498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765999312566436038,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-z6w8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dbe0a3c-a1f6-46e6-beac-d8931e039819,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e17df536b7e48e5734a2ace40c0cfdca4505d136dadcb49d48a341c5ad44be2f,PodSandboxId:1dc437030c33e1f1b1c3f7446b75eb228f28ea94d1356064d5dd5f9cf7ae961c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765999311590572973,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e51b534c-7297-4901-a6e7-63d89d9275dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08c9eb9a61ed32f25d7494c83d66ee26c80547bdfab107bdfafd06c71a2ce806,PodSandboxId:1bd82f4b4a856e37598b87b018b24c4eead12a4497a85917c3d6b85ac6a028a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a
9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765999301536234157,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-xndpj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cadb243f-ae46-400c-8188-a780a9a4974f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f18a26473585ecbde03c3eae1a070a8594be51992faebfb78ac9a623c2fd6e6c,PodSandboxId:609664b8e6016381ee97b9d8602bb0a177dd801ecc704bd7130ad1e285a236dc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1765999300183927893,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmm7b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dcd502e-bfdd-41d4-911e-b8cb873ebb8c,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9f0548c6961aa8c7b3e0b8fb9bdae38d6af780451f3b6c84a7aedb37b1535f3,PodSandboxId:7d557bd8b150e0672357cad152f999aa5e279782d859fed266a443cd72e9a535,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765999287250536093,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-886556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 883b266f381f344ce15f08d1cdc57113,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartC
ount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e9006ec843a3c6ebc7d5a43db3e8e6d3798a906f5eae750ae071dcedce2d68,PodSandboxId:1c3372d0e8f698bbe6acbf4a19f230c2a81aebb86c68b93f563a057aaeb1fd45,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765999287279063135,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-886556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1f8a4b64afdd22b1a13b05efdc91f50,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe
-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5e9c28401ad79fd2540b52da660361d5bb63b6cdeeb79bbf826a753949bd7b5,PodSandboxId:354185a9c4dc542ecb18d84642e4dca83747cbba64ac2bf8693e84ccc579b684,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765999287231033799,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-886556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a77be74558da47219e6b04daea8f969,},Annotations:
map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:638eb74bc3cef930c7aea686a3049dad59b532c6928943988a51c6a42a17fd62,PodSandboxId:b561eebc576741724fb933d2adbb05606abb086448e25dc0a4c21240c6eda634,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765999287203307145,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-886556,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 8c2aa17c88f5bbf01e49cd999fb78dc2,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5aa122d4-0279-49d2-8319-3abdf8ccc97b name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 19:26:28 addons-886556 crio[809]: time="2025-12-17 19:26:28.137257787Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=53abae11-fb5b-429d-a38f-8d3f4b79f1cb name=/runtime.v1.RuntimeService/Version
	Dec 17 19:26:28 addons-886556 crio[809]: time="2025-12-17 19:26:28.137371299Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=53abae11-fb5b-429d-a38f-8d3f4b79f1cb name=/runtime.v1.RuntimeService/Version
	Dec 17 19:26:28 addons-886556 crio[809]: time="2025-12-17 19:26:28.139283213Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=458ea727-dcc0-46ae-9172-cafad58ff08a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 19:26:28 addons-886556 crio[809]: time="2025-12-17 19:26:28.141237028Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765999588141152989,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:551113,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=458ea727-dcc0-46ae-9172-cafad58ff08a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 19:26:28 addons-886556 crio[809]: time="2025-12-17 19:26:28.142886959Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3c7d7bcf-c6c4-490b-a2b5-bef84a5a207b name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 19:26:28 addons-886556 crio[809]: time="2025-12-17 19:26:28.143075069Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3c7d7bcf-c6c4-490b-a2b5-bef84a5a207b name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 19:26:28 addons-886556 crio[809]: time="2025-12-17 19:26:28.143588097Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:509c2e34906e3c016d03d51db722a340d140c0ed93e7fe3c711e9850b6570161,PodSandboxId:ab87d79e904d52cec8a616a57e0a377a4236bcc9340e467029894f8b9bb3a395,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765999445844268114,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6dccff02-c09a-4293-83a1-fd22a7c40b8c,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81ab5371b23fe766fc6be499ce17f2093c0f26ec5dae9f5758074ff01194c13b,PodSandboxId:0b7413f5092011d719adf5bd50f250a94a00c1c099a8e42118676aa95c1933e2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765999400971257972,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2be54a14-f7e4-4cce-a350-4f3c9438f053,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83bfed642b76ad03ba6be908e3f00f6e22893595b6905d9a972bcc02ec8db95c,PodSandboxId:4230ae39e001d85cb49e9c9db1deab46baa32a80d9c9b9ea791c3042fefd07e3,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765999382449717188,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-d7b4h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: cdae934d-e441-44d9-8be3-38eda9dbad52,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount:
2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f1ae3617bb443308f1cccdabefc2860f9716c1b01e9f7982834af654e5f87f1,PodSandboxId:9ae07f54f02cef14156432bdf3b38be60352efae4fa7d61f30ea4f078d6b961e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765999382401366680,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-2lds5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5a310601-c39b-42c0-a572-5471fbb24856,},Annotations:map[string]string{io.kubernetes.container.hash: 6f360
61b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:cbfa46bd7651d15668ba7d18b6803cfff873f769b7a87dde5dccf615ecb8645e,PodSandboxId:5cb502d1fe0a1fb64e06f3134da1eb3b436887888664fee82b722f2df774fb3e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7
e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765999367952728755,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-fg4xw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0fe75cec-73ac-48c0-81b3-fc95913a3fbd,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bb1564c0fd8234f5ffc6fac2d4f1840e80da9bb871b6d578d43e86aa34bbe86,PodSandboxId:58e9f5510ce02ca6ff09f198c69741f18c5b9ee30ff8d0d5f0937c8b9d654667,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765999337624421140,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 665e2f71-8383-415a-89ea-cb281553dc9e,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:842e06a810a24be894b68fbaa693a58ec50f0780b6754c852d8edbb419ae904e,PodSandboxId:c5947a163b040dfa0a7db2ee1530e171bcc0f8adf6afa8ad14f7a3247c4ff2e0,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f38
35498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765999312566436038,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-z6w8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dbe0a3c-a1f6-46e6-beac-d8931e039819,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e17df536b7e48e5734a2ace40c0cfdca4505d136dadcb49d48a341c5ad44be2f,PodSandboxId:1dc437030c33e1f1b1c3f7446b75eb228f28ea94d1356064d5dd5f9cf7ae961c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765999311590572973,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e51b534c-7297-4901-a6e7-63d89d9275dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08c9eb9a61ed32f25d7494c83d66ee26c80547bdfab107bdfafd06c71a2ce806,PodSandboxId:1bd82f4b4a856e37598b87b018b24c4eead12a4497a85917c3d6b85ac6a028a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a
9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765999301536234157,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-xndpj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cadb243f-ae46-400c-8188-a780a9a4974f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f18a26473585ecbde03c3eae1a070a8594be51992faebfb78ac9a623c2fd6e6c,PodSandboxId:609664b8e6016381ee97b9d8602bb0a177dd801ecc704bd7130ad1e285a236dc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1765999300183927893,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmm7b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dcd502e-bfdd-41d4-911e-b8cb873ebb8c,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9f0548c6961aa8c7b3e0b8fb9bdae38d6af780451f3b6c84a7aedb37b1535f3,PodSandboxId:7d557bd8b150e0672357cad152f999aa5e279782d859fed266a443cd72e9a535,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765999287250536093,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-886556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 883b266f381f344ce15f08d1cdc57113,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartC
ount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e9006ec843a3c6ebc7d5a43db3e8e6d3798a906f5eae750ae071dcedce2d68,PodSandboxId:1c3372d0e8f698bbe6acbf4a19f230c2a81aebb86c68b93f563a057aaeb1fd45,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765999287279063135,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-886556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1f8a4b64afdd22b1a13b05efdc91f50,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe
-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5e9c28401ad79fd2540b52da660361d5bb63b6cdeeb79bbf826a753949bd7b5,PodSandboxId:354185a9c4dc542ecb18d84642e4dca83747cbba64ac2bf8693e84ccc579b684,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765999287231033799,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-886556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a77be74558da47219e6b04daea8f969,},Annotations:
map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:638eb74bc3cef930c7aea686a3049dad59b532c6928943988a51c6a42a17fd62,PodSandboxId:b561eebc576741724fb933d2adbb05606abb086448e25dc0a4c21240c6eda634,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765999287203307145,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-886556,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 8c2aa17c88f5bbf01e49cd999fb78dc2,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3c7d7bcf-c6c4-490b-a2b5-bef84a5a207b name=/runtime.v1.RuntimeService/ListContainers
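
The /runtime.v1.RuntimeService/Version, ImageFsInfo and ListContainers entries above are routine CRI polls against CRI-O; the "No filters were applied, returning full container list" lines correspond to an empty ListContainersRequest. For reference, a minimal Go sketch that issues the same RPCs directly, assuming CRI-O's default socket path (unix:///var/run/crio/crio.sock) and the CRI client stubs from k8s.io/cri-api:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// grpc-go resolves the unix:// target scheme natively; the socket path
	// below is CRI-O's default and is an assumption for this environment.
	conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// Same call as the &VersionRequest{} entries in the log.
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("runtime: %s %s (API %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

	// An empty filter returns the full container list, as in the log above.
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s %s %s/%s\n", c.Id, c.State,
			c.Labels["io.kubernetes.pod.namespace"], c.Metadata.Name)
	}
}

Its output corresponds to the container status table further down in this log.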
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	509c2e34906e3       public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff                           2 minutes ago       Running             nginx                     0                   ab87d79e904d5       nginx                                       default
	81ab5371b23fe       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   0b7413f509201       busybox                                     default
	83bfed642b76a       a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e                                                             3 minutes ago       Exited              patch                     2                   4230ae39e001d       ingress-nginx-admission-patch-d7b4h         ingress-nginx
	2f1ae3617bb44       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad             3 minutes ago       Running             controller                0                   9ae07f54f02ce       ingress-nginx-controller-85d4c799dd-2lds5   ingress-nginx
	cbfa46bd7651d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285   3 minutes ago       Exited              create                    0                   5cb502d1fe0a1       ingress-nginx-admission-create-fg4xw        ingress-nginx
	0bb1564c0fd82       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               4 minutes ago       Running             minikube-ingress-dns      0                   58e9f5510ce02       kube-ingress-dns-minikube                   kube-system
	842e06a810a24       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   c5947a163b040       amd-gpu-device-plugin-z6w8r                 kube-system
	e17df536b7e48       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   1dc437030c33e       storage-provisioner                         kube-system
	08c9eb9a61ed3       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             4 minutes ago       Running             coredns                   0                   1bd82f4b4a856       coredns-66bc5c9577-xndpj                    kube-system
	f18a26473585e       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                                             4 minutes ago       Running             kube-proxy                0                   609664b8e6016       kube-proxy-tmm7b                            kube-system
	82e9006ec843a       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                                             5 minutes ago       Running             kube-scheduler            0                   1c3372d0e8f69       kube-scheduler-addons-886556                kube-system
	f9f0548c6961a       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                                             5 minutes ago       Running             kube-controller-manager   0                   7d557bd8b150e       kube-controller-manager-addons-886556       kube-system
	c5e9c28401ad7       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                                             5 minutes ago       Running             kube-apiserver            0                   354185a9c4dc5       kube-apiserver-addons-886556                kube-system
	638eb74bc3cef       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                             5 minutes ago       Running             etcd                      0                   b561eebc57674       etcd-addons-886556                          kube-system
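
The CREATED column above ("2 minutes ago", "3 minutes ago", ...) is a relative rendering of the CreatedAt unix-nanosecond values in the ListContainers responses, taken against the log capture time of 19:26:28. A quick Go check using the nginx container's timestamp from the dump (the capture time is rounded, so the result is approximate):

package main

import (
	"fmt"
	"time"
)

func main() {
	// CreatedAt for the nginx container, copied from the ListContainers dump.
	created := time.Unix(0, 1765999445844268114)

	// Approximate capture time of this log section (from the crio entries above).
	captured, _ := time.Parse(time.RFC3339, "2025-12-17T19:26:28Z")

	fmt.Println(created.UTC())                            // 2025-12-17 19:24:05.844268114 +0000 UTC
	fmt.Println(captured.Sub(created).Round(time.Second)) // ~2m22s, shown as "2 minutes ago"
}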
	
	
	==> coredns [08c9eb9a61ed32f25d7494c83d66ee26c80547bdfab107bdfafd06c71a2ce806] <==
	[INFO] 10.244.0.8:48776 - 12420 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.002198445s
	[INFO] 10.244.0.8:48776 - 60972 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000475785s
	[INFO] 10.244.0.8:48776 - 18661 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000155413s
	[INFO] 10.244.0.8:48776 - 1812 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000175434s
	[INFO] 10.244.0.8:48776 - 9007 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000119102s
	[INFO] 10.244.0.8:48776 - 9524 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000205124s
	[INFO] 10.244.0.8:48776 - 55124 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.002205134s
	[INFO] 10.244.0.8:51736 - 49397 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000167975s
	[INFO] 10.244.0.8:51736 - 49687 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000131334s
	[INFO] 10.244.0.8:39617 - 15192 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000092479s
	[INFO] 10.244.0.8:39617 - 15414 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000152642s
	[INFO] 10.244.0.8:37064 - 42364 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000064261s
	[INFO] 10.244.0.8:37064 - 42582 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000136959s
	[INFO] 10.244.0.8:59232 - 14994 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000350558s
	[INFO] 10.244.0.8:59232 - 15198 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00009259s
	[INFO] 10.244.0.23:57812 - 17927 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001093434s
	[INFO] 10.244.0.23:46138 - 28032 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001655064s
	[INFO] 10.244.0.23:42761 - 56883 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000139257s
	[INFO] 10.244.0.23:57580 - 52478 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000225017s
	[INFO] 10.244.0.23:44076 - 34964 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000127418s
	[INFO] 10.244.0.23:59976 - 63156 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000088725s
	[INFO] 10.244.0.23:45897 - 19764 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.005364575s
	[INFO] 10.244.0.23:56748 - 25660 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.007006785s
	[INFO] 10.244.0.28:52934 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.001485295s
	[INFO] 10.244.0.28:60918 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000154725s
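
The NXDOMAIN-then-NOERROR sequences above are the usual resolv.conf search-list expansion for in-cluster lookups: with the kubelet default of ndots:5, a name like registry.kube-system.svc.cluster.local has only four dots, so each search domain is appended and tried before the name is queried as-is. A short Go sketch of that ordering; the search list shown is the typical one for a pod in kube-system and is an illustrative assumption, not read from this cluster:

package main

import (
	"fmt"
	"strings"
)

// queryOrder returns the candidate FQDNs a glibc/musl-style resolver tries,
// in order, for a name given a search list and an ndots threshold.
func queryOrder(name string, search []string, ndots int) []string {
	var out []string
	if strings.Count(name, ".") < ndots {
		// Fewer dots than ndots: try the search domains first, then the bare name.
		for _, s := range search {
			out = append(out, name+"."+s)
		}
		return append(out, name)
	}
	// Otherwise the bare name is tried first, then the search domains.
	out = append(out, name)
	for _, s := range search {
		out = append(out, name+"."+s)
	}
	return out
}

func main() {
	search := []string{"kube-system.svc.cluster.local", "svc.cluster.local", "cluster.local"}
	for _, q := range queryOrder("registry.kube-system.svc.cluster.local", search, 5) {
		fmt.Println(q)
	}
}

The output reproduces the query order seen in the log: three search-domain candidates answered NXDOMAIN, followed by the bare service name answered NOERROR.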
	
	
	==> describe nodes <==
	Name:               addons-886556
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-886556
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2e96f676eb7e96389e85fe0658a4ede4c4ba6924
	                    minikube.k8s.io/name=addons-886556
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T19_21_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-886556
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 19:21:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-886556
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 19:26:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 19:24:37 +0000   Wed, 17 Dec 2025 19:21:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 19:24:37 +0000   Wed, 17 Dec 2025 19:21:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 19:24:37 +0000   Wed, 17 Dec 2025 19:21:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 19:24:37 +0000   Wed, 17 Dec 2025 19:21:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.92
	  Hostname:    addons-886556
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001796Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001796Ki
	  pods:               110
	System Info:
	  Machine ID:                 9d7dd346d2b74fec936f08e6e7425367
	  System UUID:                9d7dd346-d2b7-4fec-936f-08e6e7425367
	  Boot ID:                    b6c6afb0-3cd2-4306-8040-20d6fd16da45
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m12s
	  default                     hello-world-app-5d498dc89-55zvp              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-2lds5    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m39s
	  kube-system                 amd-gpu-device-plugin-z6w8r                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 coredns-66bc5c9577-xndpj                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m49s
	  kube-system                 etcd-addons-886556                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m54s
	  kube-system                 kube-apiserver-addons-886556                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 kube-controller-manager-addons-886556        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 kube-proxy-tmm7b                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 kube-scheduler-addons-886556                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m47s                kube-proxy       
	  Normal  NodeHasSufficientMemory  5m2s (x8 over 5m2s)  kubelet          Node addons-886556 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m2s (x8 over 5m2s)  kubelet          Node addons-886556 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m2s (x7 over 5m2s)  kubelet          Node addons-886556 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m54s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m54s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m54s                kubelet          Node addons-886556 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m54s                kubelet          Node addons-886556 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m54s                kubelet          Node addons-886556 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m53s                kubelet          Node addons-886556 status is now: NodeReady
	  Normal  RegisteredNode           4m50s                node-controller  Node addons-886556 event: Registered Node addons-886556 in Controller
	
	
	==> dmesg <==
	[  +0.446890] kauditd_printk_skb: 284 callbacks suppressed
	[  +2.107774] kauditd_printk_skb: 428 callbacks suppressed
	[Dec17 19:22] kauditd_printk_skb: 53 callbacks suppressed
	[ +10.081555] kauditd_printk_skb: 11 callbacks suppressed
	[  +9.046903] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.390511] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.080651] kauditd_printk_skb: 17 callbacks suppressed
	[  +6.066452] kauditd_printk_skb: 131 callbacks suppressed
	[  +2.514288] kauditd_printk_skb: 77 callbacks suppressed
	[  +1.692741] kauditd_printk_skb: 124 callbacks suppressed
	[Dec17 19:23] kauditd_printk_skb: 46 callbacks suppressed
	[  +3.935200] kauditd_printk_skb: 68 callbacks suppressed
	[  +6.084297] kauditd_printk_skb: 56 callbacks suppressed
	[  +4.518929] kauditd_printk_skb: 38 callbacks suppressed
	[ +10.595753] kauditd_printk_skb: 5 callbacks suppressed
	[  +0.000066] kauditd_printk_skb: 22 callbacks suppressed
	[  +4.389164] kauditd_printk_skb: 89 callbacks suppressed
	[  +0.915229] kauditd_printk_skb: 81 callbacks suppressed
	[  +1.233366] kauditd_printk_skb: 85 callbacks suppressed
	[  +0.062258] kauditd_printk_skb: 194 callbacks suppressed
	[Dec17 19:24] kauditd_printk_skb: 60 callbacks suppressed
	[  +3.982783] kauditd_printk_skb: 88 callbacks suppressed
	[  +9.766294] kauditd_printk_skb: 42 callbacks suppressed
	[  +7.886932] kauditd_printk_skb: 61 callbacks suppressed
	[Dec17 19:26] kauditd_printk_skb: 127 callbacks suppressed
	
	
	==> etcd [638eb74bc3cef930c7aea686a3049dad59b532c6928943988a51c6a42a17fd62] <==
	{"level":"info","ts":"2025-12-17T19:22:53.561485Z","caller":"traceutil/trace.go:172","msg":"trace[1121628281] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1098; }","duration":"134.116774ms","start":"2025-12-17T19:22:53.427362Z","end":"2025-12-17T19:22:53.561479Z","steps":["trace[1121628281] 'range keys from in-memory index tree'  (duration: 134.024496ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T19:22:59.014982Z","caller":"traceutil/trace.go:172","msg":"trace[1399873484] linearizableReadLoop","detail":"{readStateIndex:1168; appliedIndex:1168; }","duration":"179.817693ms","start":"2025-12-17T19:22:58.835146Z","end":"2025-12-17T19:22:59.014964Z","steps":["trace[1399873484] 'read index received'  (duration: 179.812165ms)","trace[1399873484] 'applied index is now lower than readState.Index'  (duration: 4.703µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T19:22:59.017165Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"182.003961ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-17T19:22:59.019712Z","caller":"traceutil/trace.go:172","msg":"trace[476973923] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1135; }","duration":"184.559038ms","start":"2025-12-17T19:22:58.835142Z","end":"2025-12-17T19:22:59.019701Z","steps":["trace[476973923] 'agreement among raft nodes before linearized reading'  (duration: 180.019514ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T19:22:59.017798Z","caller":"traceutil/trace.go:172","msg":"trace[1135835113] transaction","detail":"{read_only:false; response_revision:1136; number_of_response:1; }","duration":"270.861499ms","start":"2025-12-17T19:22:58.746925Z","end":"2025-12-17T19:22:59.017786Z","steps":["trace[1135835113] 'process raft request'  (duration: 268.231019ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T19:23:02.154531Z","caller":"traceutil/trace.go:172","msg":"trace[384274761] linearizableReadLoop","detail":"{readStateIndex:1172; appliedIndex:1172; }","duration":"225.927536ms","start":"2025-12-17T19:23:01.928575Z","end":"2025-12-17T19:23:02.154503Z","steps":["trace[384274761] 'read index received'  (duration: 225.922196ms)","trace[384274761] 'applied index is now lower than readState.Index'  (duration: 4.751µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T19:23:02.154622Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"226.034001ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-17T19:23:02.154639Z","caller":"traceutil/trace.go:172","msg":"trace[1825000597] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1139; }","duration":"226.067346ms","start":"2025-12-17T19:23:01.928566Z","end":"2025-12-17T19:23:02.154634Z","steps":["trace[1825000597] 'agreement among raft nodes before linearized reading'  (duration: 226.005378ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T19:23:02.155153Z","caller":"traceutil/trace.go:172","msg":"trace[1256016360] transaction","detail":"{read_only:false; response_revision:1140; number_of_response:1; }","duration":"250.04643ms","start":"2025-12-17T19:23:01.905091Z","end":"2025-12-17T19:23:02.155138Z","steps":["trace[1256016360] 'process raft request'  (duration: 249.956849ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T19:23:18.444695Z","caller":"traceutil/trace.go:172","msg":"trace[771687873] linearizableReadLoop","detail":"{readStateIndex:1281; appliedIndex:1281; }","duration":"154.522948ms","start":"2025-12-17T19:23:18.290097Z","end":"2025-12-17T19:23:18.444620Z","steps":["trace[771687873] 'read index received'  (duration: 154.486162ms)","trace[771687873] 'applied index is now lower than readState.Index'  (duration: 35.914µs)"],"step_count":2}
	{"level":"info","ts":"2025-12-17T19:23:18.444835Z","caller":"traceutil/trace.go:172","msg":"trace[1495691571] transaction","detail":"{read_only:false; response_revision:1246; number_of_response:1; }","duration":"232.589009ms","start":"2025-12-17T19:23:18.212234Z","end":"2025-12-17T19:23:18.444823Z","steps":["trace[1495691571] 'process raft request'  (duration: 232.499756ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T19:23:18.444924Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"154.805607ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2025-12-17T19:23:18.444950Z","caller":"traceutil/trace.go:172","msg":"trace[642985539] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1246; }","duration":"154.85149ms","start":"2025-12-17T19:23:18.290093Z","end":"2025-12-17T19:23:18.444944Z","steps":["trace[642985539] 'agreement among raft nodes before linearized reading'  (duration: 154.734082ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T19:23:18.445246Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"148.98814ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/prioritylevelconfigurations\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-17T19:23:18.445293Z","caller":"traceutil/trace.go:172","msg":"trace[254387305] range","detail":"{range_begin:/registry/prioritylevelconfigurations; range_end:; response_count:0; response_revision:1246; }","duration":"149.039601ms","start":"2025-12-17T19:23:18.296247Z","end":"2025-12-17T19:23:18.445286Z","steps":["trace[254387305] 'agreement among raft nodes before linearized reading'  (duration: 148.971029ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T19:23:46.803527Z","caller":"traceutil/trace.go:172","msg":"trace[1904232902] transaction","detail":"{read_only:false; response_revision:1413; number_of_response:1; }","duration":"155.554248ms","start":"2025-12-17T19:23:46.647957Z","end":"2025-12-17T19:23:46.803511Z","steps":["trace[1904232902] 'process raft request'  (duration: 155.420257ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T19:23:50.750271Z","caller":"traceutil/trace.go:172","msg":"trace[414784047] linearizableReadLoop","detail":"{readStateIndex:1480; appliedIndex:1480; }","duration":"250.387736ms","start":"2025-12-17T19:23:50.499864Z","end":"2025-12-17T19:23:50.750252Z","steps":["trace[414784047] 'read index received'  (duration: 250.381197ms)","trace[414784047] 'applied index is now lower than readState.Index'  (duration: 5.627µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T19:23:50.750398Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"250.516345ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-17T19:23:50.750417Z","caller":"traceutil/trace.go:172","msg":"trace[358167303] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1434; }","duration":"250.550649ms","start":"2025-12-17T19:23:50.499860Z","end":"2025-12-17T19:23:50.750411Z","steps":["trace[358167303] 'agreement among raft nodes before linearized reading'  (duration: 250.486968ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T19:23:50.750572Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"249.450773ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-17T19:23:50.750611Z","caller":"traceutil/trace.go:172","msg":"trace[1085933812] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1435; }","duration":"249.496588ms","start":"2025-12-17T19:23:50.501107Z","end":"2025-12-17T19:23:50.750603Z","steps":["trace[1085933812] 'agreement among raft nodes before linearized reading'  (duration: 249.433568ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T19:23:50.751261Z","caller":"traceutil/trace.go:172","msg":"trace[1160846200] transaction","detail":"{read_only:false; response_revision:1435; number_of_response:1; }","duration":"327.763968ms","start":"2025-12-17T19:23:50.423474Z","end":"2025-12-17T19:23:50.751238Z","steps":["trace[1160846200] 'process raft request'  (duration: 326.970805ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T19:23:50.751519Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-17T19:23:50.423452Z","time spent":"327.968197ms","remote":"127.0.0.1:41306","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":539,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1412 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:452 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"warn","ts":"2025-12-17T19:23:50.756562Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"111.707287ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io\" limit:1 ","response":"range_response_count:1 size:2270"}
	{"level":"info","ts":"2025-12-17T19:23:50.756597Z","caller":"traceutil/trace.go:172","msg":"trace[1962334978] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io; range_end:; response_count:1; response_revision:1435; }","duration":"111.747082ms","start":"2025-12-17T19:23:50.644841Z","end":"2025-12-17T19:23:50.756588Z","steps":["trace[1962334978] 'agreement among raft nodes before linearized reading'  (duration: 107.056957ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:26:28 up 5 min,  0 users,  load average: 0.51, 1.10, 0.61
	Linux addons-886556 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Dec 17 12:49:57 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [c5e9c28401ad79fd2540b52da660361d5bb63b6cdeeb79bbf826a753949bd7b5] <==
	E1217 19:22:32.083813       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.247.210:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.247.210:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.247.210:443: connect: connection refused" logger="UnhandledError"
	E1217 19:22:32.105008       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.247.210:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.247.210:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.247.210:443: connect: connection refused" logger="UnhandledError"
	I1217 19:22:32.240207       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1217 19:23:28.334340       1 conn.go:339] Error on socket receive: read tcp 192.168.39.92:8443->192.168.39.1:45636: use of closed network connection
	E1217 19:23:28.555124       1 conn.go:339] Error on socket receive: read tcp 192.168.39.92:8443->192.168.39.1:45654: use of closed network connection
	I1217 19:23:38.094360       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.3.250"}
	I1217 19:23:57.754147       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1217 19:23:58.000830       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.25.57"}
	E1217 19:24:10.950297       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1217 19:24:14.787417       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1217 19:24:33.098033       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1217 19:24:38.967635       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1217 19:24:38.969917       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1217 19:24:39.000829       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1217 19:24:39.000925       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1217 19:24:39.012588       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1217 19:24:39.012778       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1217 19:24:39.056804       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1217 19:24:39.056858       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1217 19:24:39.212056       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1217 19:24:39.212180       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1217 19:24:40.000923       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1217 19:24:40.212395       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1217 19:24:40.218257       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1217 19:26:26.884114       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.26.190"}
	
	
	==> kube-controller-manager [f9f0548c6961aa8c7b3e0b8fb9bdae38d6af780451f3b6c84a7aedb37b1535f3] <==
	E1217 19:24:49.152552       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1217 19:24:49.732270       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1217 19:24:49.733358       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1217 19:24:56.632903       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1217 19:24:56.634197       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1217 19:24:58.686121       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1217 19:24:58.687551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1217 19:24:59.821019       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1217 19:24:59.822124       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I1217 19:25:09.062631       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1217 19:25:09.062740       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 19:25:09.175548       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1217 19:25:09.175616       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1217 19:25:19.562808       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1217 19:25:19.563974       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1217 19:25:19.724266       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1217 19:25:19.725476       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1217 19:25:20.239756       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1217 19:25:20.240801       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1217 19:25:52.527854       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1217 19:25:52.529036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1217 19:25:56.543386       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1217 19:25:56.544483       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1217 19:26:07.936474       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1217 19:26:07.938044       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [f18a26473585ecbde03c3eae1a070a8594be51992faebfb78ac9a623c2fd6e6c] <==
	I1217 19:21:40.627007       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 19:21:40.728238       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 19:21:40.728290       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.92"]
	E1217 19:21:40.728396       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 19:21:40.845068       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1217 19:21:40.845176       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1217 19:21:40.845220       1 server_linux.go:132] "Using iptables Proxier"
	I1217 19:21:40.884366       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 19:21:40.888010       1 server.go:527] "Version info" version="v1.34.3"
	I1217 19:21:40.888238       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 19:21:40.905736       1 config.go:200] "Starting service config controller"
	I1217 19:21:40.905755       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 19:21:40.905778       1 config.go:106] "Starting endpoint slice config controller"
	I1217 19:21:40.905782       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 19:21:40.905792       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 19:21:40.905796       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 19:21:40.906586       1 config.go:309] "Starting node config controller"
	I1217 19:21:40.906594       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 19:21:40.906599       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 19:21:41.006109       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 19:21:41.006223       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 19:21:41.006237       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [82e9006ec843a3c6ebc7d5a43db3e8e6d3798a906f5eae750ae071dcedce2d68] <==
	E1217 19:21:30.858295       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 19:21:30.858367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1217 19:21:30.858495       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1217 19:21:30.858554       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1217 19:21:30.858616       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1217 19:21:31.696323       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1217 19:21:31.705363       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1217 19:21:31.705442       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 19:21:31.793970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1217 19:21:31.793991       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1217 19:21:31.814385       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1217 19:21:31.815489       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1217 19:21:31.847535       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1217 19:21:31.899782       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1217 19:21:31.903166       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1217 19:21:31.962092       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1217 19:21:32.028925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1217 19:21:32.052298       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1217 19:21:32.078882       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1217 19:21:32.081088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1217 19:21:32.084236       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1217 19:21:32.112692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 19:21:32.165808       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1217 19:21:32.175747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1217 19:21:34.050463       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 17 19:24:44 addons-886556 kubelet[1509]: E1217 19:24:44.638449    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765999484638119309  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 19:24:44 addons-886556 kubelet[1509]: E1217 19:24:44.638469    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765999484638119309  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 19:24:54 addons-886556 kubelet[1509]: E1217 19:24:54.640381    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765999494640190049  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 19:24:54 addons-886556 kubelet[1509]: E1217 19:24:54.640403    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765999494640190049  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 19:25:04 addons-886556 kubelet[1509]: E1217 19:25:04.643485    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765999504643145179  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 19:25:04 addons-886556 kubelet[1509]: E1217 19:25:04.643529    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765999504643145179  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 19:25:14 addons-886556 kubelet[1509]: E1217 19:25:14.647232    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765999514646404141  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 19:25:14 addons-886556 kubelet[1509]: E1217 19:25:14.647511    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765999514646404141  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 19:25:24 addons-886556 kubelet[1509]: E1217 19:25:24.651182    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765999524649444236  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 19:25:24 addons-886556 kubelet[1509]: E1217 19:25:24.651232    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765999524649444236  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 19:25:34 addons-886556 kubelet[1509]: E1217 19:25:34.654593    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765999534654290412  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 19:25:34 addons-886556 kubelet[1509]: E1217 19:25:34.654638    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765999534654290412  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 19:25:38 addons-886556 kubelet[1509]: I1217 19:25:38.389962    1509 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 19:25:44 addons-886556 kubelet[1509]: E1217 19:25:44.657599    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765999544657294879  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 19:25:44 addons-886556 kubelet[1509]: E1217 19:25:44.657621    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765999544657294879  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 19:25:48 addons-886556 kubelet[1509]: I1217 19:25:48.390372    1509 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-z6w8r" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 19:25:54 addons-886556 kubelet[1509]: E1217 19:25:54.659750    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765999554659405880  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 19:25:54 addons-886556 kubelet[1509]: E1217 19:25:54.659795    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765999554659405880  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 19:26:04 addons-886556 kubelet[1509]: E1217 19:26:04.662931    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765999564662377859  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 19:26:04 addons-886556 kubelet[1509]: E1217 19:26:04.662956    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765999564662377859  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 19:26:14 addons-886556 kubelet[1509]: E1217 19:26:14.667908    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765999574665896368  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 19:26:14 addons-886556 kubelet[1509]: E1217 19:26:14.668763    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765999574665896368  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 19:26:24 addons-886556 kubelet[1509]: E1217 19:26:24.671834    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765999584671075779  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 19:26:24 addons-886556 kubelet[1509]: E1217 19:26:24.671881    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765999584671075779  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 19:26:26 addons-886556 kubelet[1509]: I1217 19:26:26.958045    1509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc8js\" (UniqueName: \"kubernetes.io/projected/9f819028-eb2e-4a6b-b5a0-aec761ac06d4-kube-api-access-wc8js\") pod \"hello-world-app-5d498dc89-55zvp\" (UID: \"9f819028-eb2e-4a6b-b5a0-aec761ac06d4\") " pod="default/hello-world-app-5d498dc89-55zvp"
	
	
	==> storage-provisioner [e17df536b7e48e5734a2ace40c0cfdca4505d136dadcb49d48a341c5ad44be2f] <==
	W1217 19:26:03.876639       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:26:05.880882       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:26:05.890394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:26:07.894756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:26:07.900611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:26:09.904915       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:26:09.914423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:26:11.918729       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:26:11.925081       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:26:13.929939       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:26:13.939202       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:26:15.943499       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:26:15.949593       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:26:17.954044       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:26:17.962078       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:26:19.966377       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:26:19.972374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:26:21.976745       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:26:21.985975       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:26:23.989977       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:26:23.995862       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:26:26.000222       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:26:26.006838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:26:28.012747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:26:28.021239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-886556 -n addons-886556
helpers_test.go:270: (dbg) Run:  kubectl --context addons-886556 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: hello-world-app-5d498dc89-55zvp ingress-nginx-admission-create-fg4xw ingress-nginx-admission-patch-d7b4h
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-886556 describe pod hello-world-app-5d498dc89-55zvp ingress-nginx-admission-create-fg4xw ingress-nginx-admission-patch-d7b4h
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-886556 describe pod hello-world-app-5d498dc89-55zvp ingress-nginx-admission-create-fg4xw ingress-nginx-admission-patch-d7b4h: exit status 1 (74.638137ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-55zvp
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-886556/192.168.39.92
	Start Time:       Wed, 17 Dec 2025 19:26:26 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wc8js (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-wc8js:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-55zvp to addons-886556
	  Normal  Pulling    2s    kubelet            spec.containers{hello-world-app}: Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-fg4xw" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-d7b4h" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-886556 describe pod hello-world-app-5d498dc89-55zvp ingress-nginx-admission-create-fg4xw ingress-nginx-admission-patch-d7b4h: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-886556 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-886556 addons disable ingress-dns --alsologtostderr -v=1: (1.579401963s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-886556 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-886556 addons disable ingress --alsologtostderr -v=1: (7.834819554s)
--- FAIL: TestAddons/parallel/Ingress (161.24s)
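For reference, the failing step above is the `minikube ssh "curl ..."` call, which exited with status 28 (curl's "operation timed out" code), so the request through the ingress most likely hung rather than returning an error page. A manual re-check could look like the following sketch; it assumes the addons-886556 profile is still running and is not part of the test harness:

    # retry the same request with an explicit timeout, then pull the
    # ingress controller logs if it hangs again
    out/minikube-linux-amd64 -p addons-886556 ssh "curl -sv --max-time 30 -H 'Host: nginx.example.com' http://127.0.0.1/"
    kubectl --context addons-886556 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50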

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (3.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 image rm kicbase/echo-server:functional-345985 --alsologtostderr
functional_test.go:407: (dbg) Done: out/minikube-linux-amd64 -p functional-345985 image rm kicbase/echo-server:functional-345985 --alsologtostderr: (3.125630635s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 image ls
functional_test.go:418: expected "kicbase/echo-server:functional-345985" to be removed from minikube but still exists
--- FAIL: TestFunctional/parallel/ImageCommands/ImageRemove (3.38s)
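The assertion at functional_test.go:418 checks the `image ls` output after the `image rm`. A minimal manual reproduction of the same check, assuming the functional-345985 profile is still up (a sketch only, not part of the harness):

    # remove the tagged image, then confirm it no longer shows up in `image ls`
    out/minikube-linux-amd64 -p functional-345985 image rm kicbase/echo-server:functional-345985 --alsologtostderr
    out/minikube-linux-amd64 -p functional-345985 image ls | grep echo-server:functional-345985 \
      && echo "image still present" || echo "image removed"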

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove (2.98s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 image rm kicbase/echo-server:functional-841762 --alsologtostderr
functional_test.go:407: (dbg) Done: out/minikube-linux-amd64 -p functional-841762 image rm kicbase/echo-server:functional-841762 --alsologtostderr: (2.767675661s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 image ls
functional_test.go:418: expected "kicbase/echo-server:functional-841762" to be removed from minikube but still exists
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove (2.98s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile (0.92s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1217 19:35:25.933673   16651 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:35:25.933791   16651 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:35:25.933801   16651 out.go:374] Setting ErrFile to fd 2...
	I1217 19:35:25.933808   16651 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:35:25.934117   16651 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-3611/.minikube/bin
	I1217 19:35:25.934952   16651 config.go:182] Loaded profile config "functional-841762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 19:35:25.935097   16651 config.go:182] Loaded profile config "functional-841762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 19:35:25.937642   16651 ssh_runner.go:195] Run: systemctl --version
	I1217 19:35:25.940207   16651 main.go:143] libmachine: domain functional-841762 has defined MAC address 52:54:00:12:95:31 in network mk-functional-841762
	I1217 19:35:25.940659   16651 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:12:95:31", ip: ""} in network mk-functional-841762: {Iface:virbr1 ExpiryTime:2025-12-17 20:32:49 +0000 UTC Type:0 Mac:52:54:00:12:95:31 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:functional-841762 Clientid:01:52:54:00:12:95:31}
	I1217 19:35:25.940696   16651 main.go:143] libmachine: domain functional-841762 has defined IP address 192.168.39.238 and MAC address 52:54:00:12:95:31 in network mk-functional-841762
	I1217 19:35:25.940860   16651 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/functional-841762/id_rsa Username:docker}
	I1217 19:35:26.048387   16651 cache_images.go:291] Loading image from: /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar
	I1217 19:35:26.048511   16651 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/echo-server-save.tar
	I1217 19:35:26.065146   16651 ssh_runner.go:362] scp /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --> /var/lib/minikube/images/echo-server-save.tar (4950016 bytes)
	I1217 19:35:26.344868   16651 crio.go:275] Loading image: /var/lib/minikube/images/echo-server-save.tar
	I1217 19:35:26.344999   16651 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/echo-server-save.tar
	W1217 19:35:26.772336   16651 cache_images.go:255] Failed to load cached images for "functional-841762": loading images: CRI-O load /var/lib/minikube/images/echo-server-save.tar: crio load image: sudo podman load -i /var/lib/minikube/images/echo-server-save.tar: Process exited with status 125
	stdout:
	
	stderr:
	Getting image source signatures
	Copying blob sha256:385288f36387f526d4826ab7d5cf1ab0e58bb5684a8257e8d19d9da3773b85da
	Copying config sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
	Writing manifest to image destination
	Storing signatures
	Error: payload does not match any of the supported image formats (oci, oci-archive, dir, docker-archive)
	I1217 19:35:26.772371   16651 cache_images.go:267] failed pushing to: functional-841762

                                                
                                                
** /stderr **
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile (0.92s)
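The podman error above says the transferred tarball did not match any supported payload format (oci, oci-archive, dir, docker-archive). A minimal Go sketch for inspecting such a tarball follows; it assumes, as background not stated in the log, that a docker-archive carries a top-level manifest.json while an OCI layout archive carries oci-layout and index.json.

	// inspect_tar.go: sketch that lists a saved image tarball's top-level entries
	// and guesses which archive layout it resembles.
	package main
	
	import (
		"archive/tar"
		"fmt"
		"io"
		"os"
		"strings"
	)
	
	func main() {
		if len(os.Args) != 2 {
			fmt.Println("usage: inspect_tar <image.tar>")
			return
		}
		f, err := os.Open(os.Args[1])
		if err != nil {
			fmt.Println("open:", err)
			return
		}
		defer f.Close()
	
		seen := map[string]bool{}
		tr := tar.NewReader(f)
		for {
			hdr, err := tr.Next()
			if err == io.EOF {
				break
			}
			if err != nil {
				fmt.Println("not a readable tar stream:", err)
				return
			}
			// Normalize entries like "./manifest.json".
			seen[strings.TrimPrefix(hdr.Name, "./")] = true
		}
	
		switch {
		case seen["manifest.json"]:
			fmt.Println("top-level manifest.json present: looks like a docker-archive")
		case seen["oci-layout"] || seen["index.json"]:
			fmt.Println("oci-layout/index.json present: looks like an OCI layout archive")
		default:
			fmt.Println("no recognizable top-level layout files found")
		}
	}

Run against the echo-server-save.tar path shown in the log, this would indicate whether the archive was already malformed on the host or was corrupted on its way to /var/lib/minikube/images.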

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon (7.58s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-841762
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 image save --daemon kicbase/echo-server:functional-841762 --alsologtostderr
functional_test.go:439: (dbg) Done: out/minikube-linux-amd64 -p functional-841762 image save --daemon kicbase/echo-server:functional-841762 --alsologtostderr: (7.543333075s)
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-841762
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-841762: exit status 1 (18.897152ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-841762

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-841762

                                                
                                                
** /stderr **
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon (7.58s)
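Here `image save --daemon` reported success after about 7.5s, yet `docker image inspect localhost/kicbase/echo-server:functional-841762` found nothing. A minimal Go sketch that lists every echo-server repository:tag known to the local Docker daemon is shown below; it is a diagnostic aid only, assuming a plain `docker image ls --format` is available on the host, and would show under which name (if any) the saved image actually landed.

	// list_echo_server_tags.go: sketch that prints all echo-server tags in the daemon.
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		out, err := exec.Command("docker", "image", "ls",
			"--format", "{{.Repository}}:{{.Tag}}").Output()
		if err != nil {
			fmt.Println("docker image ls failed:", err)
			return
		}
		found := false
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if strings.Contains(line, "echo-server") {
				fmt.Println(line)
				found = true
			}
		}
		if !found {
			fmt.Println("no echo-server tags in the local daemon")
		}
	}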

                                                
                                    
TestMountStart/serial/VerifyMountFirst (1.19s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-839174 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-839174 ssh -- findmnt --json /minikube-host
mount_start_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 -p mount-start-1-839174 ssh -- findmnt --json /minikube-host: exit status 1 (144.296121ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
mount_start_test.go:149: command failed "out/minikube-linux-amd64 -p mount-start-1-839174 ssh -- findmnt --json /minikube-host": exit status 1
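The failing check is a single `findmnt --json /minikube-host` run over `minikube ssh`. A minimal Go sketch that repeats it and decodes the result follows; the JSON shape (a top-level "filesystems" array with target/source/fstype fields) is an assumption about findmnt's output format, not something shown in this log.

	// check_mount.go: sketch repeating the findmnt check from mount_start_test.go.
	package main
	
	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)
	
	type findmntOutput struct {
		Filesystems []struct {
			Target string `json:"target"`
			Source string `json:"source"`
			Fstype string `json:"fstype"`
		} `json:"filesystems"`
	}
	
	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "mount-start-1-839174",
			"ssh", "--", "findmnt", "--json", "/minikube-host")
		out, err := cmd.Output()
		if err != nil {
			// findmnt exits non-zero when nothing is mounted at the path,
			// which matches the exit status 1 reported above.
			fmt.Println("nothing mounted at /minikube-host:", err)
			return
		}
		var parsed findmntOutput
		if err := json.Unmarshal(out, &parsed); err != nil {
			fmt.Println("unexpected findmnt output:", err)
			return
		}
		for _, fs := range parsed.Filesystems {
			fmt.Printf("%s is %s mounted from %s\n", fs.Target, fs.Fstype, fs.Source)
		}
	}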
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestMountStart/serial/VerifyMountFirst]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p mount-start-1-839174 -n mount-start-1-839174
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p mount-start-1-839174 -n mount-start-1-839174: exit status 6 (186.56361ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1217 19:59:49.223134   26864 status.go:458] kubeconfig endpoint: get endpoint: "mount-start-1-839174" does not appear in /home/jenkins/minikube-integration/22186-3611/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestMountStart/serial/VerifyMountFirst FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestMountStart/serial/VerifyMountFirst]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-839174 logs -n 25
helpers_test.go:261: TestMountStart/serial/VerifyMountFirst logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬──────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                 ARGS                                                                                                                  │         PROFILE          │   USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼──────────┼─────────┼─────────────────────┼─────────────────────┤
	│ node    │ ha-759753 node start m02 --alsologtostderr -v 5                                                                                                                                                                                       │ ha-759753                │ jenkins  │ v1.37.0 │ 17 Dec 25 19:42 UTC │ 17 Dec 25 19:42 UTC │
	│ node    │ ha-759753 node list --alsologtostderr -v 5                                                                                                                                                                                            │ ha-759753                │ jenkins  │ v1.37.0 │ 17 Dec 25 19:42 UTC │                     │
	│ stop    │ ha-759753 stop --alsologtostderr -v 5                                                                                                                                                                                                 │ ha-759753                │ jenkins  │ v1.37.0 │ 17 Dec 25 19:42 UTC │ 17 Dec 25 19:47 UTC │
	│ start   │ ha-759753 start --wait true --alsologtostderr -v 5                                                                                                                                                                                    │ ha-759753                │ jenkins  │ v1.37.0 │ 17 Dec 25 19:47 UTC │ 17 Dec 25 19:49 UTC │
	│ node    │ ha-759753 node list --alsologtostderr -v 5                                                                                                                                                                                            │ ha-759753                │ jenkins  │ v1.37.0 │ 17 Dec 25 19:49 UTC │                     │
	│ node    │ ha-759753 node delete m03 --alsologtostderr -v 5                                                                                                                                                                                      │ ha-759753                │ jenkins  │ v1.37.0 │ 17 Dec 25 19:49 UTC │ 17 Dec 25 19:49 UTC │
	│ stop    │ ha-759753 stop --alsologtostderr -v 5                                                                                                                                                                                                 │ ha-759753                │ jenkins  │ v1.37.0 │ 17 Dec 25 19:49 UTC │ 17 Dec 25 19:53 UTC │
	│ start   │ ha-759753 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio                                                                                                                                            │ ha-759753                │ jenkins  │ v1.37.0 │ 17 Dec 25 19:53 UTC │ 17 Dec 25 19:55 UTC │
	│ node    │ ha-759753 node add --control-plane --alsologtostderr -v 5                                                                                                                                                                             │ ha-759753                │ jenkins  │ v1.37.0 │ 17 Dec 25 19:55 UTC │ 17 Dec 25 19:56 UTC │
	│ delete  │ -p ha-759753                                                                                                                                                                                                                          │ ha-759753                │ jenkins  │ v1.37.0 │ 17 Dec 25 19:56 UTC │ 17 Dec 25 19:56 UTC │
	│ start   │ -p json-output-687739 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio                                                                                                                 │ json-output-687739       │ testUser │ v1.37.0 │ 17 Dec 25 19:56 UTC │ 17 Dec 25 19:57 UTC │
	│ pause   │ -p json-output-687739 --output=json --user=testUser                                                                                                                                                                                   │ json-output-687739       │ testUser │ v1.37.0 │ 17 Dec 25 19:57 UTC │ 17 Dec 25 19:57 UTC │
	│ unpause │ -p json-output-687739 --output=json --user=testUser                                                                                                                                                                                   │ json-output-687739       │ testUser │ v1.37.0 │ 17 Dec 25 19:58 UTC │ 17 Dec 25 19:58 UTC │
	│ stop    │ -p json-output-687739 --output=json --user=testUser                                                                                                                                                                                   │ json-output-687739       │ testUser │ v1.37.0 │ 17 Dec 25 19:58 UTC │ 17 Dec 25 19:58 UTC │
	│ delete  │ -p json-output-687739                                                                                                                                                                                                                 │ json-output-687739       │ jenkins  │ v1.37.0 │ 17 Dec 25 19:58 UTC │ 17 Dec 25 19:58 UTC │
	│ start   │ -p json-output-error-906228 --memory=3072 --output=json --wait=true --driver=fail                                                                                                                                                     │ json-output-error-906228 │ jenkins  │ v1.37.0 │ 17 Dec 25 19:58 UTC │                     │
	│ delete  │ -p json-output-error-906228                                                                                                                                                                                                           │ json-output-error-906228 │ jenkins  │ v1.37.0 │ 17 Dec 25 19:58 UTC │ 17 Dec 25 19:58 UTC │
	│ start   │ -p first-933941 --driver=kvm2  --container-runtime=crio                                                                                                                                                                               │ first-933941             │ jenkins  │ v1.37.0 │ 17 Dec 25 19:58 UTC │ 17 Dec 25 19:58 UTC │
	│ start   │ -p second-939850 --driver=kvm2  --container-runtime=crio                                                                                                                                                                              │ second-939850            │ jenkins  │ v1.37.0 │ 17 Dec 25 19:58 UTC │ 17 Dec 25 19:59 UTC │
	│ delete  │ -p second-939850                                                                                                                                                                                                                      │ second-939850            │ jenkins  │ v1.37.0 │ 17 Dec 25 19:59 UTC │ 17 Dec 25 19:59 UTC │
	│ delete  │ -p first-933941                                                                                                                                                                                                                       │ first-933941             │ jenkins  │ v1.37.0 │ 17 Dec 25 19:59 UTC │ 17 Dec 25 19:59 UTC │
	│ start   │ -p mount-start-1-839174 --memory=3072 --mount-string /tmp/TestMountStartserial1507744773/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio │ mount-start-1-839174     │ jenkins  │ v1.37.0 │ 17 Dec 25 19:59 UTC │ 17 Dec 25 19:59 UTC │
	│ mount   │ /tmp/TestMountStartserial1507744773/001:/minikube-host --profile mount-start-1-839174 --v 0 --9p-version 9p2000.L --gid 0 --ip  --msize 6543 --port 46464 --type 9p --uid 0                                                           │ mount-start-1-839174     │ jenkins  │ v1.37.0 │ 17 Dec 25 19:59 UTC │                     │
	│ ssh     │ mount-start-1-839174 ssh -- ls /minikube-host                                                                                                                                                                                         │ mount-start-1-839174     │ jenkins  │ v1.37.0 │ 17 Dec 25 19:59 UTC │ 17 Dec 25 19:59 UTC │
	│ ssh     │ mount-start-1-839174 ssh -- findmnt --json /minikube-host                                                                                                                                                                             │ mount-start-1-839174     │ jenkins  │ v1.37.0 │ 17 Dec 25 19:59 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴──────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 19:59:28
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 19:59:28.212617   26621 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:59:28.212716   26621 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:59:28.212720   26621 out.go:374] Setting ErrFile to fd 2...
	I1217 19:59:28.212724   26621 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:59:28.212893   26621 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-3611/.minikube/bin
	I1217 19:59:28.213308   26621 out.go:368] Setting JSON to false
	I1217 19:59:28.214100   26621 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":2507,"bootTime":1765999061,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 19:59:28.214142   26621 start.go:143] virtualization: kvm guest
	I1217 19:59:28.216175   26621 out.go:179] * [mount-start-1-839174] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 19:59:28.217194   26621 out.go:179]   - MINIKUBE_LOCATION=22186
	I1217 19:59:28.217239   26621 notify.go:221] Checking for updates...
	I1217 19:59:28.218863   26621 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 19:59:28.220643   26621 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-3611/kubeconfig
	I1217 19:59:28.221778   26621 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-3611/.minikube
	I1217 19:59:28.222743   26621 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 19:59:28.223649   26621 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 19:59:28.224684   26621 start.go:1901] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1217 19:59:28.224738   26621 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 19:59:28.255891   26621 out.go:179] * Using the kvm2 driver based on user configuration
	I1217 19:59:28.256784   26621 start.go:309] selected driver: kvm2
	I1217 19:59:28.256791   26621 start.go:927] validating driver "kvm2" against <nil>
	I1217 19:59:28.256803   26621 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 19:59:28.257681   26621 start.go:1901] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1217 19:59:28.257741   26621 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 19:59:28.257997   26621 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1217 19:59:28.258016   26621 cni.go:84] Creating CNI manager for ""
	I1217 19:59:28.258103   26621 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 19:59:28.258109   26621 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1217 19:59:28.258121   26621 start.go:1901] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1217 19:59:28.258166   26621 start.go:353] cluster config:
	{Name:mount-start-1-839174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:mount-start-1-839174 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/tmp/TestMountStartserial1507744773/001:/minikube-host Mount9PVersion:9p2000.L MountGID:0 MountIP: MountMSize:6543 MountOptions:[] MountPort:46464 MountType:9p MountUID:0 BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 19:59:28.258268   26621 iso.go:125] acquiring lock: {Name:mkf0d7f706dad630931de886de0fce55b517853c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 19:59:28.259555   26621 out.go:179] * Starting minikube without Kubernetes in cluster mount-start-1-839174
	I1217 19:59:28.260574   26621 cache.go:59] Skipping Kubernetes image caching due to --no-kubernetes flag
	I1217 19:59:28.260849   26621 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/mount-start-1-839174/config.json ...
	I1217 19:59:28.260869   26621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/mount-start-1-839174/config.json: {Name:mkd80552680baccab6245bd5235e87d476674369 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:59:28.260997   26621 start.go:360] acquireMachinesLock for mount-start-1-839174: {Name:mk03890d04d41d66ccbc23571d0f065ba20ffda0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1217 19:59:28.261020   26621 start.go:364] duration metric: took 16.233µs to acquireMachinesLock for "mount-start-1-839174"
	I1217 19:59:28.261033   26621 start.go:93] Provisioning new machine with config: &{Name:mount-start-1-839174 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:mount-start-1-839174 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/tmp/TestMountStartserial1507744773/001:/minikube-host Mount9PVersion:9p2000.L MountGID:0 MountIP: MountMSize:6543 MountOptions:[] MountPort:46464 MountType:9p MountUID:0 BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 19:59:28.261072   26621 start.go:125] createHost starting for "" (driver="kvm2")
	I1217 19:59:28.262369   26621 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1217 19:59:28.262502   26621 start.go:159] libmachine.API.Create for "mount-start-1-839174" (driver="kvm2")
	I1217 19:59:28.262522   26621 client.go:173] LocalClient.Create starting
	I1217 19:59:28.262600   26621 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca.pem
	I1217 19:59:28.262627   26621 main.go:143] libmachine: Decoding PEM data...
	I1217 19:59:28.262638   26621 main.go:143] libmachine: Parsing certificate...
	I1217 19:59:28.262672   26621 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22186-3611/.minikube/certs/cert.pem
	I1217 19:59:28.262684   26621 main.go:143] libmachine: Decoding PEM data...
	I1217 19:59:28.262692   26621 main.go:143] libmachine: Parsing certificate...
	I1217 19:59:28.262937   26621 main.go:143] libmachine: creating domain...
	I1217 19:59:28.262941   26621 main.go:143] libmachine: creating network...
	I1217 19:59:28.264179   26621 main.go:143] libmachine: found existing default network
	I1217 19:59:28.264338   26621 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1217 19:59:28.264789   26621 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c2e910}
	I1217 19:59:28.264860   26621 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-mount-start-1-839174</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1217 19:59:28.269611   26621 main.go:143] libmachine: creating private network mk-mount-start-1-839174 192.168.39.0/24...
	I1217 19:59:28.333115   26621 main.go:143] libmachine: private network mk-mount-start-1-839174 192.168.39.0/24 created
	I1217 19:59:28.333390   26621 main.go:143] libmachine: <network>
	  <name>mk-mount-start-1-839174</name>
	  <uuid>5d8086da-e50d-4db1-bd68-14411b6cc0ff</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:d8:43:4a'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1217 19:59:28.333411   26621 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22186-3611/.minikube/machines/mount-start-1-839174 ...
	I1217 19:59:28.333429   26621 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22186-3611/.minikube/cache/iso/amd64/minikube-v1.37.0-1765965980-22186-amd64.iso
	I1217 19:59:28.333434   26621 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22186-3611/.minikube
	I1217 19:59:28.333503   26621 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22186-3611/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22186-3611/.minikube/cache/iso/amd64/minikube-v1.37.0-1765965980-22186-amd64.iso...
	I1217 19:59:28.562102   26621 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22186-3611/.minikube/machines/mount-start-1-839174/id_rsa...
	I1217 19:59:28.773277   26621 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22186-3611/.minikube/machines/mount-start-1-839174/mount-start-1-839174.rawdisk...
	I1217 19:59:28.773310   26621 main.go:143] libmachine: Writing magic tar header
	I1217 19:59:28.773341   26621 main.go:143] libmachine: Writing SSH key tar header
	I1217 19:59:28.773413   26621 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22186-3611/.minikube/machines/mount-start-1-839174 ...
	I1217 19:59:28.773467   26621 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22186-3611/.minikube/machines/mount-start-1-839174
	I1217 19:59:28.773492   26621 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22186-3611/.minikube/machines/mount-start-1-839174 (perms=drwx------)
	I1217 19:59:28.773503   26621 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22186-3611/.minikube/machines
	I1217 19:59:28.773511   26621 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22186-3611/.minikube/machines (perms=drwxr-xr-x)
	I1217 19:59:28.773519   26621 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22186-3611/.minikube
	I1217 19:59:28.773539   26621 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22186-3611/.minikube (perms=drwxr-xr-x)
	I1217 19:59:28.773547   26621 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22186-3611
	I1217 19:59:28.773554   26621 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22186-3611 (perms=drwxrwxr-x)
	I1217 19:59:28.773562   26621 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1217 19:59:28.773568   26621 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1217 19:59:28.773574   26621 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1217 19:59:28.773580   26621 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1217 19:59:28.773587   26621 main.go:143] libmachine: checking permissions on dir: /home
	I1217 19:59:28.773593   26621 main.go:143] libmachine: skipping /home - not owner
	I1217 19:59:28.773596   26621 main.go:143] libmachine: defining domain...
	I1217 19:59:28.774759   26621 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>mount-start-1-839174</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22186-3611/.minikube/machines/mount-start-1-839174/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22186-3611/.minikube/machines/mount-start-1-839174/mount-start-1-839174.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-mount-start-1-839174'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1217 19:59:28.779491   26621 main.go:143] libmachine: domain mount-start-1-839174 has defined MAC address 52:54:00:c0:35:8a in network default
	I1217 19:59:28.779997   26621 main.go:143] libmachine: domain mount-start-1-839174 has defined MAC address 52:54:00:70:c0:0a in network mk-mount-start-1-839174
	I1217 19:59:28.780005   26621 main.go:143] libmachine: starting domain...
	I1217 19:59:28.780008   26621 main.go:143] libmachine: ensuring networks are active...
	I1217 19:59:28.780640   26621 main.go:143] libmachine: Ensuring network default is active
	I1217 19:59:28.780974   26621 main.go:143] libmachine: Ensuring network mk-mount-start-1-839174 is active
	I1217 19:59:28.781483   26621 main.go:143] libmachine: getting domain XML...
	I1217 19:59:28.782558   26621 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>mount-start-1-839174</name>
	  <uuid>89ae1adf-4565-4245-8f7c-ae565ff60660</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22186-3611/.minikube/machines/mount-start-1-839174/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22186-3611/.minikube/machines/mount-start-1-839174/mount-start-1-839174.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:70:c0:0a'/>
	      <source network='mk-mount-start-1-839174'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:c0:35:8a'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1217 19:59:29.997739   26621 main.go:143] libmachine: waiting for domain to start...
	I1217 19:59:29.998993   26621 main.go:143] libmachine: domain is now running
	I1217 19:59:29.999016   26621 main.go:143] libmachine: waiting for IP...
	I1217 19:59:29.999769   26621 main.go:143] libmachine: domain mount-start-1-839174 has defined MAC address 52:54:00:70:c0:0a in network mk-mount-start-1-839174
	I1217 19:59:30.000254   26621 main.go:143] libmachine: no network interface addresses found for domain mount-start-1-839174 (source=lease)
	I1217 19:59:30.000262   26621 main.go:143] libmachine: trying to list again with source=arp
	I1217 19:59:30.000539   26621 main.go:143] libmachine: unable to find current IP address of domain mount-start-1-839174 in network mk-mount-start-1-839174 (interfaces detected: [])
	I1217 19:59:30.000575   26621 retry.go:31] will retry after 293.558708ms: waiting for domain to come up
	I1217 19:59:30.296065   26621 main.go:143] libmachine: domain mount-start-1-839174 has defined MAC address 52:54:00:70:c0:0a in network mk-mount-start-1-839174
	I1217 19:59:30.296615   26621 main.go:143] libmachine: no network interface addresses found for domain mount-start-1-839174 (source=lease)
	I1217 19:59:30.296625   26621 main.go:143] libmachine: trying to list again with source=arp
	I1217 19:59:30.296883   26621 main.go:143] libmachine: unable to find current IP address of domain mount-start-1-839174 in network mk-mount-start-1-839174 (interfaces detected: [])
	I1217 19:59:30.296908   26621 retry.go:31] will retry after 318.501766ms: waiting for domain to come up
	I1217 19:59:30.617351   26621 main.go:143] libmachine: domain mount-start-1-839174 has defined MAC address 52:54:00:70:c0:0a in network mk-mount-start-1-839174
	I1217 19:59:30.617998   26621 main.go:143] libmachine: no network interface addresses found for domain mount-start-1-839174 (source=lease)
	I1217 19:59:30.618005   26621 main.go:143] libmachine: trying to list again with source=arp
	I1217 19:59:30.618239   26621 main.go:143] libmachine: unable to find current IP address of domain mount-start-1-839174 in network mk-mount-start-1-839174 (interfaces detected: [])
	I1217 19:59:30.618263   26621 retry.go:31] will retry after 448.876412ms: waiting for domain to come up
	I1217 19:59:31.069219   26621 main.go:143] libmachine: domain mount-start-1-839174 has defined MAC address 52:54:00:70:c0:0a in network mk-mount-start-1-839174
	I1217 19:59:31.069874   26621 main.go:143] libmachine: no network interface addresses found for domain mount-start-1-839174 (source=lease)
	I1217 19:59:31.069883   26621 main.go:143] libmachine: trying to list again with source=arp
	I1217 19:59:31.070162   26621 main.go:143] libmachine: unable to find current IP address of domain mount-start-1-839174 in network mk-mount-start-1-839174 (interfaces detected: [])
	I1217 19:59:31.070194   26621 retry.go:31] will retry after 546.119561ms: waiting for domain to come up
	I1217 19:59:31.617722   26621 main.go:143] libmachine: domain mount-start-1-839174 has defined MAC address 52:54:00:70:c0:0a in network mk-mount-start-1-839174
	I1217 19:59:31.618212   26621 main.go:143] libmachine: no network interface addresses found for domain mount-start-1-839174 (source=lease)
	I1217 19:59:31.618218   26621 main.go:143] libmachine: trying to list again with source=arp
	I1217 19:59:31.618444   26621 main.go:143] libmachine: unable to find current IP address of domain mount-start-1-839174 in network mk-mount-start-1-839174 (interfaces detected: [])
	I1217 19:59:31.618466   26621 retry.go:31] will retry after 587.879096ms: waiting for domain to come up
	I1217 19:59:32.208072   26621 main.go:143] libmachine: domain mount-start-1-839174 has defined MAC address 52:54:00:70:c0:0a in network mk-mount-start-1-839174
	I1217 19:59:32.208634   26621 main.go:143] libmachine: no network interface addresses found for domain mount-start-1-839174 (source=lease)
	I1217 19:59:32.208644   26621 main.go:143] libmachine: trying to list again with source=arp
	I1217 19:59:32.208904   26621 main.go:143] libmachine: unable to find current IP address of domain mount-start-1-839174 in network mk-mount-start-1-839174 (interfaces detected: [])
	I1217 19:59:32.208931   26621 retry.go:31] will retry after 915.043299ms: waiting for domain to come up
	I1217 19:59:33.125954   26621 main.go:143] libmachine: domain mount-start-1-839174 has defined MAC address 52:54:00:70:c0:0a in network mk-mount-start-1-839174
	I1217 19:59:33.126450   26621 main.go:143] libmachine: no network interface addresses found for domain mount-start-1-839174 (source=lease)
	I1217 19:59:33.126456   26621 main.go:143] libmachine: trying to list again with source=arp
	I1217 19:59:33.126736   26621 main.go:143] libmachine: unable to find current IP address of domain mount-start-1-839174 in network mk-mount-start-1-839174 (interfaces detected: [])
	I1217 19:59:33.126761   26621 retry.go:31] will retry after 1.074595877s: waiting for domain to come up
	I1217 19:59:34.203311   26621 main.go:143] libmachine: domain mount-start-1-839174 has defined MAC address 52:54:00:70:c0:0a in network mk-mount-start-1-839174
	I1217 19:59:34.203945   26621 main.go:143] libmachine: no network interface addresses found for domain mount-start-1-839174 (source=lease)
	I1217 19:59:34.203952   26621 main.go:143] libmachine: trying to list again with source=arp
	I1217 19:59:34.204242   26621 main.go:143] libmachine: unable to find current IP address of domain mount-start-1-839174 in network mk-mount-start-1-839174 (interfaces detected: [])
	I1217 19:59:34.204273   26621 retry.go:31] will retry after 1.038730893s: waiting for domain to come up
	I1217 19:59:35.244317   26621 main.go:143] libmachine: domain mount-start-1-839174 has defined MAC address 52:54:00:70:c0:0a in network mk-mount-start-1-839174
	I1217 19:59:35.244810   26621 main.go:143] libmachine: no network interface addresses found for domain mount-start-1-839174 (source=lease)
	I1217 19:59:35.244817   26621 main.go:143] libmachine: trying to list again with source=arp
	I1217 19:59:35.245050   26621 main.go:143] libmachine: unable to find current IP address of domain mount-start-1-839174 in network mk-mount-start-1-839174 (interfaces detected: [])
	I1217 19:59:35.245073   26621 retry.go:31] will retry after 1.546696345s: waiting for domain to come up
	I1217 19:59:36.793933   26621 main.go:143] libmachine: domain mount-start-1-839174 has defined MAC address 52:54:00:70:c0:0a in network mk-mount-start-1-839174
	I1217 19:59:36.794642   26621 main.go:143] libmachine: no network interface addresses found for domain mount-start-1-839174 (source=lease)
	I1217 19:59:36.794653   26621 main.go:143] libmachine: trying to list again with source=arp
	I1217 19:59:36.795011   26621 main.go:143] libmachine: unable to find current IP address of domain mount-start-1-839174 in network mk-mount-start-1-839174 (interfaces detected: [])
	I1217 19:59:36.795039   26621 retry.go:31] will retry after 2.220100792s: waiting for domain to come up
	I1217 19:59:39.016340   26621 main.go:143] libmachine: domain mount-start-1-839174 has defined MAC address 52:54:00:70:c0:0a in network mk-mount-start-1-839174
	I1217 19:59:39.017032   26621 main.go:143] libmachine: no network interface addresses found for domain mount-start-1-839174 (source=lease)
	I1217 19:59:39.017042   26621 main.go:143] libmachine: trying to list again with source=arp
	I1217 19:59:39.017324   26621 main.go:143] libmachine: unable to find current IP address of domain mount-start-1-839174 in network mk-mount-start-1-839174 (interfaces detected: [])
	I1217 19:59:39.017364   26621 retry.go:31] will retry after 2.818437291s: waiting for domain to come up
	I1217 19:59:41.839174   26621 main.go:143] libmachine: domain mount-start-1-839174 has defined MAC address 52:54:00:70:c0:0a in network mk-mount-start-1-839174
	I1217 19:59:41.839716   26621 main.go:143] libmachine: no network interface addresses found for domain mount-start-1-839174 (source=lease)
	I1217 19:59:41.839724   26621 main.go:143] libmachine: trying to list again with source=arp
	I1217 19:59:41.839977   26621 main.go:143] libmachine: unable to find current IP address of domain mount-start-1-839174 in network mk-mount-start-1-839174 (interfaces detected: [])
	I1217 19:59:41.840000   26621 retry.go:31] will retry after 2.946727435s: waiting for domain to come up
	I1217 19:59:44.789112   26621 main.go:143] libmachine: domain mount-start-1-839174 has defined MAC address 52:54:00:70:c0:0a in network mk-mount-start-1-839174
	I1217 19:59:44.789711   26621 main.go:143] libmachine: domain mount-start-1-839174 has current primary IP address 192.168.39.194 and MAC address 52:54:00:70:c0:0a in network mk-mount-start-1-839174
	I1217 19:59:44.789719   26621 main.go:143] libmachine: found domain IP: 192.168.39.194
	I1217 19:59:44.789724   26621 main.go:143] libmachine: reserving static IP address...
	I1217 19:59:44.790112   26621 main.go:143] libmachine: unable to find host DHCP lease matching {name: "mount-start-1-839174", mac: "52:54:00:70:c0:0a", ip: "192.168.39.194"} in network mk-mount-start-1-839174
	I1217 19:59:44.960396   26621 main.go:143] libmachine: reserved static IP address 192.168.39.194 for domain mount-start-1-839174
	I1217 19:59:44.960409   26621 main.go:143] libmachine: waiting for SSH...
	I1217 19:59:44.960415   26621 main.go:143] libmachine: Getting to WaitForSSH function...
	I1217 19:59:44.963544   26621 main.go:143] libmachine: domain mount-start-1-839174 has defined MAC address 52:54:00:70:c0:0a in network mk-mount-start-1-839174
	I1217 19:59:44.964041   26621 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:c0:0a", ip: ""} in network mk-mount-start-1-839174: {Iface:virbr1 ExpiryTime:2025-12-17 20:59:43 +0000 UTC Type:0 Mac:52:54:00:70:c0:0a Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:minikube Clientid:01:52:54:00:70:c0:0a}
	I1217 19:59:44.964078   26621 main.go:143] libmachine: domain mount-start-1-839174 has defined IP address 192.168.39.194 and MAC address 52:54:00:70:c0:0a in network mk-mount-start-1-839174
	I1217 19:59:44.964260   26621 main.go:143] libmachine: Using SSH client type: native
	I1217 19:59:44.964613   26621 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I1217 19:59:44.964621   26621 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1217 19:59:45.077945   26621 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 19:59:45.078273   26621 main.go:143] libmachine: domain creation complete
	I1217 19:59:45.079846   26621 machine.go:94] provisionDockerMachine start ...
	I1217 19:59:45.082233   26621 main.go:143] libmachine: domain mount-start-1-839174 has defined MAC address 52:54:00:70:c0:0a in network mk-mount-start-1-839174
	I1217 19:59:45.082661   26621 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:c0:0a", ip: ""} in network mk-mount-start-1-839174: {Iface:virbr1 ExpiryTime:2025-12-17 20:59:43 +0000 UTC Type:0 Mac:52:54:00:70:c0:0a Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:mount-start-1-839174 Clientid:01:52:54:00:70:c0:0a}
	I1217 19:59:45.082678   26621 main.go:143] libmachine: domain mount-start-1-839174 has defined IP address 192.168.39.194 and MAC address 52:54:00:70:c0:0a in network mk-mount-start-1-839174
	I1217 19:59:45.082829   26621 main.go:143] libmachine: Using SSH client type: native
	I1217 19:59:45.083012   26621 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I1217 19:59:45.083016   26621 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 19:59:45.194896   26621 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1217 19:59:45.194912   26621 buildroot.go:166] provisioning hostname "mount-start-1-839174"
	I1217 19:59:45.197981   26621 main.go:143] libmachine: domain mount-start-1-839174 has defined MAC address 52:54:00:70:c0:0a in network mk-mount-start-1-839174
	I1217 19:59:45.198398   26621 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:c0:0a", ip: ""} in network mk-mount-start-1-839174: {Iface:virbr1 ExpiryTime:2025-12-17 20:59:43 +0000 UTC Type:0 Mac:52:54:00:70:c0:0a Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:mount-start-1-839174 Clientid:01:52:54:00:70:c0:0a}
	I1217 19:59:45.198413   26621 main.go:143] libmachine: domain mount-start-1-839174 has defined IP address 192.168.39.194 and MAC address 52:54:00:70:c0:0a in network mk-mount-start-1-839174
	I1217 19:59:45.198613   26621 main.go:143] libmachine: Using SSH client type: native
	I1217 19:59:45.198833   26621 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I1217 19:59:45.198839   26621 main.go:143] libmachine: About to run SSH command:
	sudo hostname mount-start-1-839174 && echo "mount-start-1-839174" | sudo tee /etc/hostname
	I1217 19:59:45.328086   26621 main.go:143] libmachine: SSH cmd err, output: <nil>: mount-start-1-839174
	
	I1217 19:59:45.331374   26621 main.go:143] libmachine: domain mount-start-1-839174 has defined MAC address 52:54:00:70:c0:0a in network mk-mount-start-1-839174
	I1217 19:59:45.331850   26621 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:c0:0a", ip: ""} in network mk-mount-start-1-839174: {Iface:virbr1 ExpiryTime:2025-12-17 20:59:43 +0000 UTC Type:0 Mac:52:54:00:70:c0:0a Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:mount-start-1-839174 Clientid:01:52:54:00:70:c0:0a}
	I1217 19:59:45.331884   26621 main.go:143] libmachine: domain mount-start-1-839174 has defined IP address 192.168.39.194 and MAC address 52:54:00:70:c0:0a in network mk-mount-start-1-839174
	I1217 19:59:45.332099   26621 main.go:143] libmachine: Using SSH client type: native
	I1217 19:59:45.332353   26621 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I1217 19:59:45.332371   26621 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smount-start-1-839174' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 mount-start-1-839174/g' /etc/hosts;
				else 
					echo '127.0.1.1 mount-start-1-839174' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 19:59:45.457894   26621 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 19:59:45.457913   26621 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22186-3611/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-3611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-3611/.minikube}
	I1217 19:59:45.457949   26621 buildroot.go:174] setting up certificates
	I1217 19:59:45.457970   26621 provision.go:84] configureAuth start
	I1217 19:59:45.460889   26621 main.go:143] libmachine: domain mount-start-1-839174 has defined MAC address 52:54:00:70:c0:0a in network mk-mount-start-1-839174
	I1217 19:59:45.461245   26621 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:c0:0a", ip: ""} in network mk-mount-start-1-839174: {Iface:virbr1 ExpiryTime:2025-12-17 20:59:43 +0000 UTC Type:0 Mac:52:54:00:70:c0:0a Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:mount-start-1-839174 Clientid:01:52:54:00:70:c0:0a}
	I1217 19:59:45.461272   26621 main.go:143] libmachine: domain mount-start-1-839174 has defined IP address 192.168.39.194 and MAC address 52:54:00:70:c0:0a in network mk-mount-start-1-839174
	I1217 19:59:45.463350   26621 main.go:143] libmachine: domain mount-start-1-839174 has defined MAC address 52:54:00:70:c0:0a in network mk-mount-start-1-839174
	I1217 19:59:45.463627   26621 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:c0:0a", ip: ""} in network mk-mount-start-1-839174: {Iface:virbr1 ExpiryTime:2025-12-17 20:59:43 +0000 UTC Type:0 Mac:52:54:00:70:c0:0a Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:mount-start-1-839174 Clientid:01:52:54:00:70:c0:0a}
	I1217 19:59:45.463641   26621 main.go:143] libmachine: domain mount-start-1-839174 has defined IP address 192.168.39.194 and MAC address 52:54:00:70:c0:0a in network mk-mount-start-1-839174
	I1217 19:59:45.463739   26621 provision.go:143] copyHostCerts
	I1217 19:59:45.463799   26621 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-3611/.minikube/ca.pem, removing ...
	I1217 19:59:45.463809   26621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-3611/.minikube/ca.pem
	I1217 19:59:45.463869   26621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-3611/.minikube/ca.pem (1082 bytes)
	I1217 19:59:45.463962   26621 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-3611/.minikube/cert.pem, removing ...
	I1217 19:59:45.463966   26621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-3611/.minikube/cert.pem
	I1217 19:59:45.463990   26621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-3611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-3611/.minikube/cert.pem (1123 bytes)
	I1217 19:59:45.464056   26621 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-3611/.minikube/key.pem, removing ...
	I1217 19:59:45.464058   26621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-3611/.minikube/key.pem
	I1217 19:59:45.464079   26621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-3611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-3611/.minikube/key.pem (1679 bytes)
	I1217 19:59:45.464132   26621 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-3611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca-key.pem org=jenkins.mount-start-1-839174 san=[127.0.0.1 192.168.39.194 localhost minikube mount-start-1-839174]
	I1217 19:59:45.519441   26621 provision.go:177] copyRemoteCerts
	I1217 19:59:45.519488   26621 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 19:59:45.521882   26621 main.go:143] libmachine: domain mount-start-1-839174 has defined MAC address 52:54:00:70:c0:0a in network mk-mount-start-1-839174
	I1217 19:59:45.522175   26621 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:c0:0a", ip: ""} in network mk-mount-start-1-839174: {Iface:virbr1 ExpiryTime:2025-12-17 20:59:43 +0000 UTC Type:0 Mac:52:54:00:70:c0:0a Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:mount-start-1-839174 Clientid:01:52:54:00:70:c0:0a}
	I1217 19:59:45.522193   26621 main.go:143] libmachine: domain mount-start-1-839174 has defined IP address 192.168.39.194 and MAC address 52:54:00:70:c0:0a in network mk-mount-start-1-839174
	I1217 19:59:45.522393   26621 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/mount-start-1-839174/id_rsa Username:docker}
	I1217 19:59:45.610000   26621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1217 19:59:45.639216   26621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 19:59:45.668854   26621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 19:59:45.698061   26621 provision.go:87] duration metric: took 240.081194ms to configureAuth
	I1217 19:59:45.698092   26621 buildroot.go:189] setting minikube options for container-runtime
	I1217 19:59:45.698273   26621 config.go:182] Loaded profile config "mount-start-1-839174": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1217 19:59:45.701065   26621 main.go:143] libmachine: domain mount-start-1-839174 has defined MAC address 52:54:00:70:c0:0a in network mk-mount-start-1-839174
	I1217 19:59:45.701448   26621 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:c0:0a", ip: ""} in network mk-mount-start-1-839174: {Iface:virbr1 ExpiryTime:2025-12-17 20:59:43 +0000 UTC Type:0 Mac:52:54:00:70:c0:0a Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:mount-start-1-839174 Clientid:01:52:54:00:70:c0:0a}
	I1217 19:59:45.701463   26621 main.go:143] libmachine: domain mount-start-1-839174 has defined IP address 192.168.39.194 and MAC address 52:54:00:70:c0:0a in network mk-mount-start-1-839174
	I1217 19:59:45.701687   26621 main.go:143] libmachine: Using SSH client type: native
	I1217 19:59:45.701900   26621 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I1217 19:59:45.701910   26621 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 19:59:45.945670   26621 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 19:59:45.945688   26621 machine.go:97] duration metric: took 865.8331ms to provisionDockerMachine
	I1217 19:59:45.945700   26621 client.go:176] duration metric: took 17.68317314s to LocalClient.Create
	I1217 19:59:45.945725   26621 start.go:167] duration metric: took 17.683221967s to libmachine.API.Create "mount-start-1-839174"
	I1217 19:59:45.945733   26621 start.go:293] postStartSetup for "mount-start-1-839174" (driver="kvm2")
	I1217 19:59:45.945744   26621 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 19:59:45.945799   26621 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 19:59:45.948635   26621 main.go:143] libmachine: domain mount-start-1-839174 has defined MAC address 52:54:00:70:c0:0a in network mk-mount-start-1-839174
	I1217 19:59:45.949014   26621 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:c0:0a", ip: ""} in network mk-mount-start-1-839174: {Iface:virbr1 ExpiryTime:2025-12-17 20:59:43 +0000 UTC Type:0 Mac:52:54:00:70:c0:0a Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:mount-start-1-839174 Clientid:01:52:54:00:70:c0:0a}
	I1217 19:59:45.949029   26621 main.go:143] libmachine: domain mount-start-1-839174 has defined IP address 192.168.39.194 and MAC address 52:54:00:70:c0:0a in network mk-mount-start-1-839174
	I1217 19:59:45.949126   26621 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/mount-start-1-839174/id_rsa Username:docker}
	I1217 19:59:46.036107   26621 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 19:59:46.040984   26621 info.go:137] Remote host: Buildroot 2025.02
	I1217 19:59:46.040999   26621 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-3611/.minikube/addons for local assets ...
	I1217 19:59:46.041058   26621 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-3611/.minikube/files for local assets ...
	I1217 19:59:46.041120   26621 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-3611/.minikube/files/etc/ssl/certs/75312.pem -> 75312.pem in /etc/ssl/certs
	I1217 19:59:46.041199   26621 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 19:59:46.052443   26621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/files/etc/ssl/certs/75312.pem --> /etc/ssl/certs/75312.pem (1708 bytes)
	I1217 19:59:46.080807   26621 start.go:296] duration metric: took 135.062614ms for postStartSetup
	I1217 19:59:46.083651   26621 main.go:143] libmachine: domain mount-start-1-839174 has defined MAC address 52:54:00:70:c0:0a in network mk-mount-start-1-839174
	I1217 19:59:46.083966   26621 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:c0:0a", ip: ""} in network mk-mount-start-1-839174: {Iface:virbr1 ExpiryTime:2025-12-17 20:59:43 +0000 UTC Type:0 Mac:52:54:00:70:c0:0a Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:mount-start-1-839174 Clientid:01:52:54:00:70:c0:0a}
	I1217 19:59:46.083992   26621 main.go:143] libmachine: domain mount-start-1-839174 has defined IP address 192.168.39.194 and MAC address 52:54:00:70:c0:0a in network mk-mount-start-1-839174
	I1217 19:59:46.084191   26621 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/mount-start-1-839174/config.json ...
	I1217 19:59:46.084352   26621 start.go:128] duration metric: took 17.823273686s to createHost
	I1217 19:59:46.086281   26621 main.go:143] libmachine: domain mount-start-1-839174 has defined MAC address 52:54:00:70:c0:0a in network mk-mount-start-1-839174
	I1217 19:59:46.086630   26621 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:c0:0a", ip: ""} in network mk-mount-start-1-839174: {Iface:virbr1 ExpiryTime:2025-12-17 20:59:43 +0000 UTC Type:0 Mac:52:54:00:70:c0:0a Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:mount-start-1-839174 Clientid:01:52:54:00:70:c0:0a}
	I1217 19:59:46.086645   26621 main.go:143] libmachine: domain mount-start-1-839174 has defined IP address 192.168.39.194 and MAC address 52:54:00:70:c0:0a in network mk-mount-start-1-839174
	I1217 19:59:46.086770   26621 main.go:143] libmachine: Using SSH client type: native
	I1217 19:59:46.086952   26621 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I1217 19:59:46.086956   26621 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1217 19:59:46.200830   26621 main.go:143] libmachine: SSH cmd err, output: <nil>: 1766001586.166614774
	
	I1217 19:59:46.200841   26621 fix.go:216] guest clock: 1766001586.166614774
	I1217 19:59:46.200848   26621 fix.go:229] Guest: 2025-12-17 19:59:46.166614774 +0000 UTC Remote: 2025-12-17 19:59:46.084357881 +0000 UTC m=+17.918549205 (delta=82.256893ms)
	I1217 19:59:46.200872   26621 fix.go:200] guest clock delta is within tolerance: 82.256893ms
	I1217 19:59:46.200877   26621 start.go:83] releasing machines lock for "mount-start-1-839174", held for 17.939851722s
	I1217 19:59:46.203668   26621 main.go:143] libmachine: domain mount-start-1-839174 has defined MAC address 52:54:00:70:c0:0a in network mk-mount-start-1-839174
	I1217 19:59:46.203972   26621 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:c0:0a", ip: ""} in network mk-mount-start-1-839174: {Iface:virbr1 ExpiryTime:2025-12-17 20:59:43 +0000 UTC Type:0 Mac:52:54:00:70:c0:0a Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:mount-start-1-839174 Clientid:01:52:54:00:70:c0:0a}
	I1217 19:59:46.203987   26621 main.go:143] libmachine: domain mount-start-1-839174 has defined IP address 192.168.39.194 and MAC address 52:54:00:70:c0:0a in network mk-mount-start-1-839174
	I1217 19:59:46.204542   26621 ssh_runner.go:195] Run: cat /version.json
	I1217 19:59:46.204603   26621 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 19:59:46.207758   26621 main.go:143] libmachine: domain mount-start-1-839174 has defined MAC address 52:54:00:70:c0:0a in network mk-mount-start-1-839174
	I1217 19:59:46.208093   26621 main.go:143] libmachine: domain mount-start-1-839174 has defined MAC address 52:54:00:70:c0:0a in network mk-mount-start-1-839174
	I1217 19:59:46.208095   26621 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:c0:0a", ip: ""} in network mk-mount-start-1-839174: {Iface:virbr1 ExpiryTime:2025-12-17 20:59:43 +0000 UTC Type:0 Mac:52:54:00:70:c0:0a Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:mount-start-1-839174 Clientid:01:52:54:00:70:c0:0a}
	I1217 19:59:46.208117   26621 main.go:143] libmachine: domain mount-start-1-839174 has defined IP address 192.168.39.194 and MAC address 52:54:00:70:c0:0a in network mk-mount-start-1-839174
	I1217 19:59:46.208329   26621 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/mount-start-1-839174/id_rsa Username:docker}
	I1217 19:59:46.208652   26621 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:c0:0a", ip: ""} in network mk-mount-start-1-839174: {Iface:virbr1 ExpiryTime:2025-12-17 20:59:43 +0000 UTC Type:0 Mac:52:54:00:70:c0:0a Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:mount-start-1-839174 Clientid:01:52:54:00:70:c0:0a}
	I1217 19:59:46.208676   26621 main.go:143] libmachine: domain mount-start-1-839174 has defined IP address 192.168.39.194 and MAC address 52:54:00:70:c0:0a in network mk-mount-start-1-839174
	I1217 19:59:46.208855   26621 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/mount-start-1-839174/id_rsa Username:docker}
	I1217 19:59:46.312853   26621 ssh_runner.go:195] Run: systemctl --version
	I1217 19:59:46.319132   26621 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 19:59:46.481381   26621 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 19:59:46.488197   26621 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 19:59:46.488273   26621 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 19:59:46.508083   26621 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 19:59:46.508095   26621 start.go:496] detecting cgroup driver to use...
	I1217 19:59:46.508161   26621 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 19:59:46.526458   26621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 19:59:46.543209   26621 docker.go:218] disabling cri-docker service (if available) ...
	I1217 19:59:46.543264   26621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 19:59:46.560084   26621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 19:59:46.575414   26621 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 19:59:46.717818   26621 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 19:59:46.922447   26621 docker.go:234] disabling docker service ...
	I1217 19:59:46.922507   26621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 19:59:46.938253   26621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 19:59:46.953080   26621 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 19:59:47.107574   26621 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 19:59:47.251741   26621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 19:59:47.266911   26621 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 19:59:47.288792   26621 binary.go:59] Skipping Kubernetes binary download due to --no-kubernetes flag
	I1217 19:59:47.288816   26621 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1217 19:59:47.288853   26621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:59:47.300651   26621 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 19:59:47.300704   26621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:59:47.312370   26621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:59:47.323806   26621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:59:47.335474   26621 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 19:59:47.347734   26621 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 19:59:47.357460   26621 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1217 19:59:47.357508   26621 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1217 19:59:47.377543   26621 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 19:59:47.388497   26621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 19:59:47.523786   26621 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 19:59:47.634281   26621 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 19:59:47.634346   26621 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 19:59:47.639759   26621 start.go:564] Will wait 60s for crictl version
	I1217 19:59:47.639812   26621 ssh_runner.go:195] Run: which crictl
	I1217 19:59:47.643892   26621 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1217 19:59:47.678578   26621 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1217 19:59:47.678688   26621 ssh_runner.go:195] Run: crio --version
	I1217 19:59:47.707297   26621 ssh_runner.go:195] Run: crio --version
	I1217 19:59:47.738308   26621 out.go:179] * Preparing CRI-O 1.29.1 ...
	I1217 19:59:47.739406   26621 out.go:179] * Creating mount /tmp/TestMountStartserial1507744773/001:/minikube-host ...
	I1217 19:59:47.740780   26621 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/mount-start-1-839174/.mount-process: {Name:mk3828ce77a19532cf15a9abc3bf909f5f99c6ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:59:47.741112   26621 ssh_runner.go:195] Run: rm -f paused
	I1217 19:59:47.751653   26621 out.go:179] * Done! minikube is ready without Kubernetes!
	I1217 19:59:47.754201   26621 out.go:203] ╭───────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                       │
	│                        * Things to try without Kubernetes ...                         │
	│                                                                                       │
	│    - "minikube ssh" to SSH into minikube's node.                                      │
	│    - "minikube podman-env" to point your podman-cli to the podman inside minikube.    │
	│    - "minikube image" to build images without docker.                                 │
	│                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────╯
	
	
	==> CRI-O <==
	Dec 17 19:59:49 mount-start-1-839174 crio[808]: time="2025-12-17 19:59:49.496334611Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766001589496312306,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:4096,},InodesUsed:&UInt64Value{Value:2,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8a5bd1b3-4dbe-4e38-9389-90cab62b1c85 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 19:59:49 mount-start-1-839174 crio[808]: time="2025-12-17 19:59:49.497261740Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3b321a86-daa6-4920-87f8-64e35ca00e5c name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 19:59:49 mount-start-1-839174 crio[808]: time="2025-12-17 19:59:49.497322887Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3b321a86-daa6-4920-87f8-64e35ca00e5c name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 19:59:49 mount-start-1-839174 crio[808]: time="2025-12-17 19:59:49.497465658Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=3b321a86-daa6-4920-87f8-64e35ca00e5c name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 19:59:49 mount-start-1-839174 crio[808]: time="2025-12-17 19:59:49.525822615Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=674a4ed9-8622-40c6-b1d5-a697fa610d1a name=/runtime.v1.RuntimeService/Version
	Dec 17 19:59:49 mount-start-1-839174 crio[808]: time="2025-12-17 19:59:49.525904718Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=674a4ed9-8622-40c6-b1d5-a697fa610d1a name=/runtime.v1.RuntimeService/Version
	Dec 17 19:59:49 mount-start-1-839174 crio[808]: time="2025-12-17 19:59:49.527235085Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9abe7f4a-2267-4851-a929-b7edb5900821 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 19:59:49 mount-start-1-839174 crio[808]: time="2025-12-17 19:59:49.527412948Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766001589527337140,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:4096,},InodesUsed:&UInt64Value{Value:2,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9abe7f4a-2267-4851-a929-b7edb5900821 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 19:59:49 mount-start-1-839174 crio[808]: time="2025-12-17 19:59:49.528054323Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=71c8fd35-728c-484f-9acf-3da3ae2e1d2c name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 19:59:49 mount-start-1-839174 crio[808]: time="2025-12-17 19:59:49.528127653Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=71c8fd35-728c-484f-9acf-3da3ae2e1d2c name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 19:59:49 mount-start-1-839174 crio[808]: time="2025-12-17 19:59:49.528162254Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=71c8fd35-728c-484f-9acf-3da3ae2e1d2c name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 19:59:49 mount-start-1-839174 crio[808]: time="2025-12-17 19:59:49.555685999Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=af81e80d-69be-4906-859d-f290f8cc4b34 name=/runtime.v1.RuntimeService/Version
	Dec 17 19:59:49 mount-start-1-839174 crio[808]: time="2025-12-17 19:59:49.555802307Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=af81e80d-69be-4906-859d-f290f8cc4b34 name=/runtime.v1.RuntimeService/Version
	Dec 17 19:59:49 mount-start-1-839174 crio[808]: time="2025-12-17 19:59:49.557190687Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9a857905-a67c-4de5-9b5e-96450f5c30c7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 19:59:49 mount-start-1-839174 crio[808]: time="2025-12-17 19:59:49.557307257Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766001589557289350,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:4096,},InodesUsed:&UInt64Value{Value:2,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9a857905-a67c-4de5-9b5e-96450f5c30c7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 19:59:49 mount-start-1-839174 crio[808]: time="2025-12-17 19:59:49.558334821Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=27dff176-8cb1-4053-925f-d2858a635ab0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 19:59:49 mount-start-1-839174 crio[808]: time="2025-12-17 19:59:49.558683823Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=27dff176-8cb1-4053-925f-d2858a635ab0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 19:59:49 mount-start-1-839174 crio[808]: time="2025-12-17 19:59:49.558756358Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=27dff176-8cb1-4053-925f-d2858a635ab0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 19:59:49 mount-start-1-839174 crio[808]: time="2025-12-17 19:59:49.585823366Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0c869f9b-437a-46f9-8e4c-da143433e908 name=/runtime.v1.RuntimeService/Version
	Dec 17 19:59:49 mount-start-1-839174 crio[808]: time="2025-12-17 19:59:49.585963671Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0c869f9b-437a-46f9-8e4c-da143433e908 name=/runtime.v1.RuntimeService/Version
	Dec 17 19:59:49 mount-start-1-839174 crio[808]: time="2025-12-17 19:59:49.587657317Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=31ed189c-35b4-4737-bccc-3492b66d6b88 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 19:59:49 mount-start-1-839174 crio[808]: time="2025-12-17 19:59:49.587752005Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766001589587734833,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:4096,},InodesUsed:&UInt64Value{Value:2,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=31ed189c-35b4-4737-bccc-3492b66d6b88 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 19:59:49 mount-start-1-839174 crio[808]: time="2025-12-17 19:59:49.588561583Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=746f8d76-5396-4a98-8e97-abdde7991026 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 19:59:49 mount-start-1-839174 crio[808]: time="2025-12-17 19:59:49.588621705Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=746f8d76-5396-4a98-8e97-abdde7991026 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 19:59:49 mount-start-1-839174 crio[808]: time="2025-12-17 19:59:49.588652978Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=746f8d76-5396-4a98-8e97-abdde7991026 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v0.0.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v0.0.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	sudo: /var/lib/minikube/binaries/v0.0.0/kubectl: command not found
	
	
	==> dmesg <==
	[Dec17 19:59] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001636] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.003693] (rpcbind)[120]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.152089] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.082619] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.003234] 9pnet_fd: p9_fd_create_tcp (859): problem connecting socket to 192.168.39.1
	
	
	==> kernel <==
	 19:59:49 up 0 min,  0 users,  load average: 0.15, 0.03, 0.01
	Linux mount-start-1-839174 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Dec 17 12:49:57 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kubelet <==
	-- No entries --
	

                                                
                                                
-- /stdout --
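The log above ends with the runtime checks minikube performs against the no-Kubernetes VM (crictl version, crio --version, an empty container list). If a run like this needs to be inspected by hand, roughly the same checks can be issued over SSH; a minimal sketch, with the profile name taken from this run (crictl ps -a is simply the CLI counterpart of the ListContainers calls seen in the CRI-O log, not something the test itself runs):

	# confirm the CRI-O version that the log reports
	out/minikube-linux-amd64 -p mount-start-1-839174 ssh "sudo /usr/bin/crictl version"
	out/minikube-linux-amd64 -p mount-start-1-839174 ssh "crio --version"
	# list containers; with --no-kubernetes this is expected to be empty
	out/minikube-linux-amd64 -p mount-start-1-839174 ssh "sudo crictl ps -a"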
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p mount-start-1-839174 -n mount-start-1-839174
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p mount-start-1-839174 -n mount-start-1-839174: exit status 6 (187.946821ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1217 19:59:49.936066   26886 status.go:458] kubeconfig endpoint: get endpoint: "mount-start-1-839174" does not appear in /home/jenkins/minikube-integration/22186-3611/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "mount-start-1-839174" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMountStart/serial/VerifyMountFirst (1.19s)
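The dmesg excerpt above (9pnet_fd: p9_fd_create_tcp ... problem connecting socket to 192.168.39.1) suggests the guest never reached the host-side 9p server backing the /minikube-host mount created at 19:59:47. When triaging this kind of failure by hand, the mount state can be checked directly in the guest; a minimal sketch, with the profile name and mount point taken from this run and the grep patterns chosen only for illustration:

	# is the 9p share actually mounted at /minikube-host?
	out/minikube-linux-amd64 -p mount-start-1-839174 ssh "mount | grep -E '9p|minikube-host'"
	# look for 9p connection errors on the guest side
	out/minikube-linux-amd64 -p mount-start-1-839174 ssh "sudo dmesg | grep 9pnet"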

                                                
                                    
x
+
TestPreload (138.92s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-900733 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio
E1217 20:13:16.195593    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:13:19.555500    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-841762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-900733 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio: (1m25.830138249s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-900733 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 -p test-preload-900733 image pull gcr.io/k8s-minikube/busybox: (3.795928244s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-900733
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-900733: (8.063062274s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-900733 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-900733 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (38.745871832s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-900733 image list
preload_test.go:73: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/kube-scheduler:v1.34.3
	registry.k8s.io/kube-proxy:v1.34.3
	registry.k8s.io/kube-controller-manager:v1.34.3
	registry.k8s.io/kube-apiserver:v1.34.3
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20250512-df8de77b

                                                
                                                
-- /stdout --
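The assertion at preload_test.go:73 only inspects the output of "image list", and after the restart that output no longer contains the gcr.io/k8s-minikube/busybox image pulled before the stop. When reproducing this manually, roughly the same check (plus a direct look at CRI-O's image store) can be done as follows; the profile name comes from this run and the grep pattern is only illustrative:

	# the check the test performs, roughly: is busybox still listed after the restart?
	out/minikube-linux-amd64 -p test-preload-900733 image list | grep busybox || echo "busybox missing"
	# inspect CRI-O's image store directly inside the VM
	out/minikube-linux-amd64 -p test-preload-900733 ssh "sudo crictl images"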
panic.go:615: *** TestPreload FAILED at 2025-12-17 20:15:15.806945751 +0000 UTC m=+3310.395788648
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-900733 -n test-preload-900733
helpers_test.go:253: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-900733 logs -n 25
E1217 20:15:16.485905    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-841762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:261: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-643742 ssh -n multinode-643742-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-643742     │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │ 17 Dec 25 20:02 UTC │
	│ ssh     │ multinode-643742 ssh -n multinode-643742 sudo cat /home/docker/cp-test_multinode-643742-m03_multinode-643742.txt                                          │ multinode-643742     │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │ 17 Dec 25 20:02 UTC │
	│ cp      │ multinode-643742 cp multinode-643742-m03:/home/docker/cp-test.txt multinode-643742-m02:/home/docker/cp-test_multinode-643742-m03_multinode-643742-m02.txt │ multinode-643742     │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │ 17 Dec 25 20:02 UTC │
	│ ssh     │ multinode-643742 ssh -n multinode-643742-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-643742     │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │ 17 Dec 25 20:02 UTC │
	│ ssh     │ multinode-643742 ssh -n multinode-643742-m02 sudo cat /home/docker/cp-test_multinode-643742-m03_multinode-643742-m02.txt                                  │ multinode-643742     │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │ 17 Dec 25 20:02 UTC │
	│ node    │ multinode-643742 node stop m03                                                                                                                            │ multinode-643742     │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │ 17 Dec 25 20:02 UTC │
	│ node    │ multinode-643742 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-643742     │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │ 17 Dec 25 20:03 UTC │
	│ node    │ list -p multinode-643742                                                                                                                                  │ multinode-643742     │ jenkins │ v1.37.0 │ 17 Dec 25 20:03 UTC │                     │
	│ stop    │ -p multinode-643742                                                                                                                                       │ multinode-643742     │ jenkins │ v1.37.0 │ 17 Dec 25 20:03 UTC │ 17 Dec 25 20:05 UTC │
	│ start   │ -p multinode-643742 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-643742     │ jenkins │ v1.37.0 │ 17 Dec 25 20:05 UTC │ 17 Dec 25 20:07 UTC │
	│ node    │ list -p multinode-643742                                                                                                                                  │ multinode-643742     │ jenkins │ v1.37.0 │ 17 Dec 25 20:07 UTC │                     │
	│ node    │ multinode-643742 node delete m03                                                                                                                          │ multinode-643742     │ jenkins │ v1.37.0 │ 17 Dec 25 20:07 UTC │ 17 Dec 25 20:07 UTC │
	│ stop    │ multinode-643742 stop                                                                                                                                     │ multinode-643742     │ jenkins │ v1.37.0 │ 17 Dec 25 20:07 UTC │ 17 Dec 25 20:10 UTC │
	│ start   │ -p multinode-643742 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-643742     │ jenkins │ v1.37.0 │ 17 Dec 25 20:10 UTC │ 17 Dec 25 20:12 UTC │
	│ node    │ list -p multinode-643742                                                                                                                                  │ multinode-643742     │ jenkins │ v1.37.0 │ 17 Dec 25 20:12 UTC │                     │
	│ start   │ -p multinode-643742-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-643742-m02 │ jenkins │ v1.37.0 │ 17 Dec 25 20:12 UTC │                     │
	│ start   │ -p multinode-643742-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-643742-m03 │ jenkins │ v1.37.0 │ 17 Dec 25 20:12 UTC │ 17 Dec 25 20:12 UTC │
	│ node    │ add -p multinode-643742                                                                                                                                   │ multinode-643742     │ jenkins │ v1.37.0 │ 17 Dec 25 20:12 UTC │                     │
	│ delete  │ -p multinode-643742-m03                                                                                                                                   │ multinode-643742-m03 │ jenkins │ v1.37.0 │ 17 Dec 25 20:12 UTC │ 17 Dec 25 20:12 UTC │
	│ delete  │ -p multinode-643742                                                                                                                                       │ multinode-643742     │ jenkins │ v1.37.0 │ 17 Dec 25 20:12 UTC │ 17 Dec 25 20:12 UTC │
	│ start   │ -p test-preload-900733 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio                                │ test-preload-900733  │ jenkins │ v1.37.0 │ 17 Dec 25 20:12 UTC │ 17 Dec 25 20:14 UTC │
	│ image   │ test-preload-900733 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-900733  │ jenkins │ v1.37.0 │ 17 Dec 25 20:14 UTC │ 17 Dec 25 20:14 UTC │
	│ stop    │ -p test-preload-900733                                                                                                                                    │ test-preload-900733  │ jenkins │ v1.37.0 │ 17 Dec 25 20:14 UTC │ 17 Dec 25 20:14 UTC │
	│ start   │ -p test-preload-900733 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                          │ test-preload-900733  │ jenkins │ v1.37.0 │ 17 Dec 25 20:14 UTC │ 17 Dec 25 20:15 UTC │
	│ image   │ test-preload-900733 image list                                                                                                                            │ test-preload-900733  │ jenkins │ v1.37.0 │ 17 Dec 25 20:15 UTC │ 17 Dec 25 20:15 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 20:14:36
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 20:14:36.931332   32779 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:14:36.931636   32779 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:14:36.931646   32779 out.go:374] Setting ErrFile to fd 2...
	I1217 20:14:36.931651   32779 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:14:36.931844   32779 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-3611/.minikube/bin
	I1217 20:14:36.932245   32779 out.go:368] Setting JSON to false
	I1217 20:14:36.933102   32779 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3416,"bootTime":1765999061,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 20:14:36.933150   32779 start.go:143] virtualization: kvm guest
	I1217 20:14:36.935115   32779 out.go:179] * [test-preload-900733] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 20:14:36.936104   32779 notify.go:221] Checking for updates...
	I1217 20:14:36.936119   32779 out.go:179]   - MINIKUBE_LOCATION=22186
	I1217 20:14:36.937118   32779 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 20:14:36.938174   32779 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-3611/kubeconfig
	I1217 20:14:36.939346   32779 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-3611/.minikube
	I1217 20:14:36.940428   32779 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 20:14:36.941292   32779 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 20:14:36.942501   32779 config.go:182] Loaded profile config "test-preload-900733": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:14:36.942918   32779 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 20:14:36.979664   32779 out.go:179] * Using the kvm2 driver based on existing profile
	I1217 20:14:36.980724   32779 start.go:309] selected driver: kvm2
	I1217 20:14:36.980737   32779 start.go:927] validating driver "kvm2" against &{Name:test-preload-900733 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:test-preload-900733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:14:36.980836   32779 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 20:14:36.981714   32779 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 20:14:36.981743   32779 cni.go:84] Creating CNI manager for ""
	I1217 20:14:36.981793   32779 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 20:14:36.981834   32779 start.go:353] cluster config:
	{Name:test-preload-900733 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:test-preload-900733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:14:36.981908   32779 iso.go:125] acquiring lock: {Name:mkf0d7f706dad630931de886de0fce55b517853c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 20:14:36.983146   32779 out.go:179] * Starting "test-preload-900733" primary control-plane node in "test-preload-900733" cluster
	I1217 20:14:36.984115   32779 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:14:36.984139   32779 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-3611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1217 20:14:36.984145   32779 cache.go:65] Caching tarball of preloaded images
	I1217 20:14:36.984207   32779 preload.go:238] Found /home/jenkins/minikube-integration/22186-3611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 20:14:36.984217   32779 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 20:14:36.984297   32779 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/test-preload-900733/config.json ...
	I1217 20:14:36.984471   32779 start.go:360] acquireMachinesLock for test-preload-900733: {Name:mk03890d04d41d66ccbc23571d0f065ba20ffda0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1217 20:14:36.984507   32779 start.go:364] duration metric: took 20.226µs to acquireMachinesLock for "test-preload-900733"
	I1217 20:14:36.984520   32779 start.go:96] Skipping create...Using existing machine configuration
	I1217 20:14:36.984536   32779 fix.go:54] fixHost starting: 
	I1217 20:14:36.986283   32779 fix.go:112] recreateIfNeeded on test-preload-900733: state=Stopped err=<nil>
	W1217 20:14:36.986307   32779 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 20:14:36.987662   32779 out.go:252] * Restarting existing kvm2 VM for "test-preload-900733" ...
	I1217 20:14:36.987697   32779 main.go:143] libmachine: starting domain...
	I1217 20:14:36.987707   32779 main.go:143] libmachine: ensuring networks are active...
	I1217 20:14:36.988426   32779 main.go:143] libmachine: Ensuring network default is active
	I1217 20:14:36.988880   32779 main.go:143] libmachine: Ensuring network mk-test-preload-900733 is active
	I1217 20:14:36.989365   32779 main.go:143] libmachine: getting domain XML...
	I1217 20:14:36.990628   32779 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>test-preload-900733</name>
	  <uuid>80cbdbb3-952c-4bfc-aed3-ce535da4852e</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22186-3611/.minikube/machines/test-preload-900733/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22186-3611/.minikube/machines/test-preload-900733/test-preload-900733.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:82:8b:7b'/>
	      <source network='mk-test-preload-900733'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:be:9a:e2'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1217 20:14:38.219021   32779 main.go:143] libmachine: waiting for domain to start...
	I1217 20:14:38.220394   32779 main.go:143] libmachine: domain is now running
	I1217 20:14:38.220415   32779 main.go:143] libmachine: waiting for IP...
	I1217 20:14:38.221114   32779 main.go:143] libmachine: domain test-preload-900733 has defined MAC address 52:54:00:82:8b:7b in network mk-test-preload-900733
	I1217 20:14:38.221740   32779 main.go:143] libmachine: domain test-preload-900733 has current primary IP address 192.168.39.26 and MAC address 52:54:00:82:8b:7b in network mk-test-preload-900733
	I1217 20:14:38.221755   32779 main.go:143] libmachine: found domain IP: 192.168.39.26
	I1217 20:14:38.221761   32779 main.go:143] libmachine: reserving static IP address...
	I1217 20:14:38.222150   32779 main.go:143] libmachine: found host DHCP lease matching {name: "test-preload-900733", mac: "52:54:00:82:8b:7b", ip: "192.168.39.26"} in network mk-test-preload-900733: {Iface:virbr1 ExpiryTime:2025-12-17 21:13:14 +0000 UTC Type:0 Mac:52:54:00:82:8b:7b Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:test-preload-900733 Clientid:01:52:54:00:82:8b:7b}
	I1217 20:14:38.222185   32779 main.go:143] libmachine: skip adding static IP to network mk-test-preload-900733 - found existing host DHCP lease matching {name: "test-preload-900733", mac: "52:54:00:82:8b:7b", ip: "192.168.39.26"}
	I1217 20:14:38.222193   32779 main.go:143] libmachine: reserved static IP address 192.168.39.26 for domain test-preload-900733
	I1217 20:14:38.222198   32779 main.go:143] libmachine: waiting for SSH...
	I1217 20:14:38.222203   32779 main.go:143] libmachine: Getting to WaitForSSH function...
	I1217 20:14:38.224600   32779 main.go:143] libmachine: domain test-preload-900733 has defined MAC address 52:54:00:82:8b:7b in network mk-test-preload-900733
	I1217 20:14:38.224987   32779 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:82:8b:7b", ip: ""} in network mk-test-preload-900733: {Iface:virbr1 ExpiryTime:2025-12-17 21:13:14 +0000 UTC Type:0 Mac:52:54:00:82:8b:7b Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:test-preload-900733 Clientid:01:52:54:00:82:8b:7b}
	I1217 20:14:38.225009   32779 main.go:143] libmachine: domain test-preload-900733 has defined IP address 192.168.39.26 and MAC address 52:54:00:82:8b:7b in network mk-test-preload-900733
	I1217 20:14:38.225194   32779 main.go:143] libmachine: Using SSH client type: native
	I1217 20:14:38.225449   32779 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I1217 20:14:38.225463   32779 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1217 20:14:41.324767   32779 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.26:22: connect: no route to host
	I1217 20:14:47.404823   32779 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.26:22: connect: no route to host
	I1217 20:14:50.519944   32779 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 20:14:50.523292   32779 main.go:143] libmachine: domain test-preload-900733 has defined MAC address 52:54:00:82:8b:7b in network mk-test-preload-900733
	I1217 20:14:50.523772   32779 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:82:8b:7b", ip: ""} in network mk-test-preload-900733: {Iface:virbr1 ExpiryTime:2025-12-17 21:14:48 +0000 UTC Type:0 Mac:52:54:00:82:8b:7b Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:test-preload-900733 Clientid:01:52:54:00:82:8b:7b}
	I1217 20:14:50.523801   32779 main.go:143] libmachine: domain test-preload-900733 has defined IP address 192.168.39.26 and MAC address 52:54:00:82:8b:7b in network mk-test-preload-900733
	I1217 20:14:50.524042   32779 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/test-preload-900733/config.json ...
	I1217 20:14:50.524315   32779 machine.go:94] provisionDockerMachine start ...
	I1217 20:14:50.526308   32779 main.go:143] libmachine: domain test-preload-900733 has defined MAC address 52:54:00:82:8b:7b in network mk-test-preload-900733
	I1217 20:14:50.526738   32779 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:82:8b:7b", ip: ""} in network mk-test-preload-900733: {Iface:virbr1 ExpiryTime:2025-12-17 21:14:48 +0000 UTC Type:0 Mac:52:54:00:82:8b:7b Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:test-preload-900733 Clientid:01:52:54:00:82:8b:7b}
	I1217 20:14:50.526772   32779 main.go:143] libmachine: domain test-preload-900733 has defined IP address 192.168.39.26 and MAC address 52:54:00:82:8b:7b in network mk-test-preload-900733
	I1217 20:14:50.526950   32779 main.go:143] libmachine: Using SSH client type: native
	I1217 20:14:50.527181   32779 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I1217 20:14:50.527194   32779 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 20:14:50.628028   32779 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1217 20:14:50.628056   32779 buildroot.go:166] provisioning hostname "test-preload-900733"
	I1217 20:14:50.630958   32779 main.go:143] libmachine: domain test-preload-900733 has defined MAC address 52:54:00:82:8b:7b in network mk-test-preload-900733
	I1217 20:14:50.631328   32779 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:82:8b:7b", ip: ""} in network mk-test-preload-900733: {Iface:virbr1 ExpiryTime:2025-12-17 21:14:48 +0000 UTC Type:0 Mac:52:54:00:82:8b:7b Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:test-preload-900733 Clientid:01:52:54:00:82:8b:7b}
	I1217 20:14:50.631363   32779 main.go:143] libmachine: domain test-preload-900733 has defined IP address 192.168.39.26 and MAC address 52:54:00:82:8b:7b in network mk-test-preload-900733
	I1217 20:14:50.631512   32779 main.go:143] libmachine: Using SSH client type: native
	I1217 20:14:50.631716   32779 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I1217 20:14:50.631728   32779 main.go:143] libmachine: About to run SSH command:
	sudo hostname test-preload-900733 && echo "test-preload-900733" | sudo tee /etc/hostname
	I1217 20:14:50.750151   32779 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-900733
	
	I1217 20:14:50.752956   32779 main.go:143] libmachine: domain test-preload-900733 has defined MAC address 52:54:00:82:8b:7b in network mk-test-preload-900733
	I1217 20:14:50.753323   32779 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:82:8b:7b", ip: ""} in network mk-test-preload-900733: {Iface:virbr1 ExpiryTime:2025-12-17 21:14:48 +0000 UTC Type:0 Mac:52:54:00:82:8b:7b Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:test-preload-900733 Clientid:01:52:54:00:82:8b:7b}
	I1217 20:14:50.753348   32779 main.go:143] libmachine: domain test-preload-900733 has defined IP address 192.168.39.26 and MAC address 52:54:00:82:8b:7b in network mk-test-preload-900733
	I1217 20:14:50.753480   32779 main.go:143] libmachine: Using SSH client type: native
	I1217 20:14:50.753683   32779 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I1217 20:14:50.753698   32779 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-900733' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-900733/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-900733' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 20:14:50.864068   32779 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 20:14:50.864096   32779 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22186-3611/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-3611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-3611/.minikube}
	I1217 20:14:50.864114   32779 buildroot.go:174] setting up certificates
	I1217 20:14:50.864123   32779 provision.go:84] configureAuth start
	I1217 20:14:50.867062   32779 main.go:143] libmachine: domain test-preload-900733 has defined MAC address 52:54:00:82:8b:7b in network mk-test-preload-900733
	I1217 20:14:50.867466   32779 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:82:8b:7b", ip: ""} in network mk-test-preload-900733: {Iface:virbr1 ExpiryTime:2025-12-17 21:14:48 +0000 UTC Type:0 Mac:52:54:00:82:8b:7b Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:test-preload-900733 Clientid:01:52:54:00:82:8b:7b}
	I1217 20:14:50.867489   32779 main.go:143] libmachine: domain test-preload-900733 has defined IP address 192.168.39.26 and MAC address 52:54:00:82:8b:7b in network mk-test-preload-900733
	I1217 20:14:50.869903   32779 main.go:143] libmachine: domain test-preload-900733 has defined MAC address 52:54:00:82:8b:7b in network mk-test-preload-900733
	I1217 20:14:50.870261   32779 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:82:8b:7b", ip: ""} in network mk-test-preload-900733: {Iface:virbr1 ExpiryTime:2025-12-17 21:14:48 +0000 UTC Type:0 Mac:52:54:00:82:8b:7b Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:test-preload-900733 Clientid:01:52:54:00:82:8b:7b}
	I1217 20:14:50.870281   32779 main.go:143] libmachine: domain test-preload-900733 has defined IP address 192.168.39.26 and MAC address 52:54:00:82:8b:7b in network mk-test-preload-900733
	I1217 20:14:50.870433   32779 provision.go:143] copyHostCerts
	I1217 20:14:50.870505   32779 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-3611/.minikube/ca.pem, removing ...
	I1217 20:14:50.870538   32779 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-3611/.minikube/ca.pem
	I1217 20:14:50.870639   32779 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-3611/.minikube/ca.pem (1082 bytes)
	I1217 20:14:50.870751   32779 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-3611/.minikube/cert.pem, removing ...
	I1217 20:14:50.870760   32779 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-3611/.minikube/cert.pem
	I1217 20:14:50.870790   32779 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-3611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-3611/.minikube/cert.pem (1123 bytes)
	I1217 20:14:50.870851   32779 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-3611/.minikube/key.pem, removing ...
	I1217 20:14:50.870858   32779 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-3611/.minikube/key.pem
	I1217 20:14:50.870881   32779 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-3611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-3611/.minikube/key.pem (1679 bytes)
	I1217 20:14:50.870930   32779 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-3611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca-key.pem org=jenkins.test-preload-900733 san=[127.0.0.1 192.168.39.26 localhost minikube test-preload-900733]
	I1217 20:14:51.050893   32779 provision.go:177] copyRemoteCerts
	I1217 20:14:51.050982   32779 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 20:14:51.053750   32779 main.go:143] libmachine: domain test-preload-900733 has defined MAC address 52:54:00:82:8b:7b in network mk-test-preload-900733
	I1217 20:14:51.054117   32779 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:82:8b:7b", ip: ""} in network mk-test-preload-900733: {Iface:virbr1 ExpiryTime:2025-12-17 21:14:48 +0000 UTC Type:0 Mac:52:54:00:82:8b:7b Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:test-preload-900733 Clientid:01:52:54:00:82:8b:7b}
	I1217 20:14:51.054146   32779 main.go:143] libmachine: domain test-preload-900733 has defined IP address 192.168.39.26 and MAC address 52:54:00:82:8b:7b in network mk-test-preload-900733
	I1217 20:14:51.054303   32779 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/test-preload-900733/id_rsa Username:docker}
	I1217 20:14:51.135877   32779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 20:14:51.164153   32779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1217 20:14:51.192925   32779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 20:14:51.220627   32779 provision.go:87] duration metric: took 356.491915ms to configureAuth
	I1217 20:14:51.220652   32779 buildroot.go:189] setting minikube options for container-runtime
	I1217 20:14:51.220802   32779 config.go:182] Loaded profile config "test-preload-900733": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:14:51.223708   32779 main.go:143] libmachine: domain test-preload-900733 has defined MAC address 52:54:00:82:8b:7b in network mk-test-preload-900733
	I1217 20:14:51.224110   32779 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:82:8b:7b", ip: ""} in network mk-test-preload-900733: {Iface:virbr1 ExpiryTime:2025-12-17 21:14:48 +0000 UTC Type:0 Mac:52:54:00:82:8b:7b Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:test-preload-900733 Clientid:01:52:54:00:82:8b:7b}
	I1217 20:14:51.224136   32779 main.go:143] libmachine: domain test-preload-900733 has defined IP address 192.168.39.26 and MAC address 52:54:00:82:8b:7b in network mk-test-preload-900733
	I1217 20:14:51.224323   32779 main.go:143] libmachine: Using SSH client type: native
	I1217 20:14:51.224590   32779 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I1217 20:14:51.224611   32779 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 20:14:51.490643   32779 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 20:14:51.490672   32779 machine.go:97] duration metric: took 966.340608ms to provisionDockerMachine
	I1217 20:14:51.490687   32779 start.go:293] postStartSetup for "test-preload-900733" (driver="kvm2")
	I1217 20:14:51.490699   32779 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 20:14:51.490760   32779 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 20:14:51.493169   32779 main.go:143] libmachine: domain test-preload-900733 has defined MAC address 52:54:00:82:8b:7b in network mk-test-preload-900733
	I1217 20:14:51.493547   32779 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:82:8b:7b", ip: ""} in network mk-test-preload-900733: {Iface:virbr1 ExpiryTime:2025-12-17 21:14:48 +0000 UTC Type:0 Mac:52:54:00:82:8b:7b Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:test-preload-900733 Clientid:01:52:54:00:82:8b:7b}
	I1217 20:14:51.493574   32779 main.go:143] libmachine: domain test-preload-900733 has defined IP address 192.168.39.26 and MAC address 52:54:00:82:8b:7b in network mk-test-preload-900733
	I1217 20:14:51.493735   32779 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/test-preload-900733/id_rsa Username:docker}
	I1217 20:14:51.584521   32779 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 20:14:51.589298   32779 info.go:137] Remote host: Buildroot 2025.02
	I1217 20:14:51.589321   32779 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-3611/.minikube/addons for local assets ...
	I1217 20:14:51.589387   32779 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-3611/.minikube/files for local assets ...
	I1217 20:14:51.589474   32779 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-3611/.minikube/files/etc/ssl/certs/75312.pem -> 75312.pem in /etc/ssl/certs
	I1217 20:14:51.589587   32779 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 20:14:51.600613   32779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/files/etc/ssl/certs/75312.pem --> /etc/ssl/certs/75312.pem (1708 bytes)
	I1217 20:14:51.629147   32779 start.go:296] duration metric: took 138.44439ms for postStartSetup
	I1217 20:14:51.629183   32779 fix.go:56] duration metric: took 14.644656893s for fixHost
	I1217 20:14:51.631660   32779 main.go:143] libmachine: domain test-preload-900733 has defined MAC address 52:54:00:82:8b:7b in network mk-test-preload-900733
	I1217 20:14:51.631991   32779 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:82:8b:7b", ip: ""} in network mk-test-preload-900733: {Iface:virbr1 ExpiryTime:2025-12-17 21:14:48 +0000 UTC Type:0 Mac:52:54:00:82:8b:7b Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:test-preload-900733 Clientid:01:52:54:00:82:8b:7b}
	I1217 20:14:51.632010   32779 main.go:143] libmachine: domain test-preload-900733 has defined IP address 192.168.39.26 and MAC address 52:54:00:82:8b:7b in network mk-test-preload-900733
	I1217 20:14:51.632186   32779 main.go:143] libmachine: Using SSH client type: native
	I1217 20:14:51.632373   32779 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I1217 20:14:51.632382   32779 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1217 20:14:51.733713   32779 main.go:143] libmachine: SSH cmd err, output: <nil>: 1766002491.692575088
	
	I1217 20:14:51.733740   32779 fix.go:216] guest clock: 1766002491.692575088
	I1217 20:14:51.733748   32779 fix.go:229] Guest: 2025-12-17 20:14:51.692575088 +0000 UTC Remote: 2025-12-17 20:14:51.629186924 +0000 UTC m=+14.743505260 (delta=63.388164ms)
	I1217 20:14:51.733766   32779 fix.go:200] guest clock delta is within tolerance: 63.388164ms
	I1217 20:14:51.733771   32779 start.go:83] releasing machines lock for "test-preload-900733", held for 14.749255922s
	I1217 20:14:51.736554   32779 main.go:143] libmachine: domain test-preload-900733 has defined MAC address 52:54:00:82:8b:7b in network mk-test-preload-900733
	I1217 20:14:51.736936   32779 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:82:8b:7b", ip: ""} in network mk-test-preload-900733: {Iface:virbr1 ExpiryTime:2025-12-17 21:14:48 +0000 UTC Type:0 Mac:52:54:00:82:8b:7b Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:test-preload-900733 Clientid:01:52:54:00:82:8b:7b}
	I1217 20:14:51.736959   32779 main.go:143] libmachine: domain test-preload-900733 has defined IP address 192.168.39.26 and MAC address 52:54:00:82:8b:7b in network mk-test-preload-900733
	I1217 20:14:51.737513   32779 ssh_runner.go:195] Run: cat /version.json
	I1217 20:14:51.737572   32779 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 20:14:51.740343   32779 main.go:143] libmachine: domain test-preload-900733 has defined MAC address 52:54:00:82:8b:7b in network mk-test-preload-900733
	I1217 20:14:51.740778   32779 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:82:8b:7b", ip: ""} in network mk-test-preload-900733: {Iface:virbr1 ExpiryTime:2025-12-17 21:14:48 +0000 UTC Type:0 Mac:52:54:00:82:8b:7b Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:test-preload-900733 Clientid:01:52:54:00:82:8b:7b}
	I1217 20:14:51.740809   32779 main.go:143] libmachine: domain test-preload-900733 has defined IP address 192.168.39.26 and MAC address 52:54:00:82:8b:7b in network mk-test-preload-900733
	I1217 20:14:51.740819   32779 main.go:143] libmachine: domain test-preload-900733 has defined MAC address 52:54:00:82:8b:7b in network mk-test-preload-900733
	I1217 20:14:51.740981   32779 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/test-preload-900733/id_rsa Username:docker}
	I1217 20:14:51.741396   32779 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:82:8b:7b", ip: ""} in network mk-test-preload-900733: {Iface:virbr1 ExpiryTime:2025-12-17 21:14:48 +0000 UTC Type:0 Mac:52:54:00:82:8b:7b Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:test-preload-900733 Clientid:01:52:54:00:82:8b:7b}
	I1217 20:14:51.741439   32779 main.go:143] libmachine: domain test-preload-900733 has defined IP address 192.168.39.26 and MAC address 52:54:00:82:8b:7b in network mk-test-preload-900733
	I1217 20:14:51.741702   32779 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/test-preload-900733/id_rsa Username:docker}
	I1217 20:14:51.817040   32779 ssh_runner.go:195] Run: systemctl --version
	I1217 20:14:51.844885   32779 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 20:14:51.989849   32779 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 20:14:51.996862   32779 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 20:14:51.996924   32779 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 20:14:52.016000   32779 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 20:14:52.016017   32779 start.go:496] detecting cgroup driver to use...
	I1217 20:14:52.016071   32779 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 20:14:52.034402   32779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 20:14:52.050185   32779 docker.go:218] disabling cri-docker service (if available) ...
	I1217 20:14:52.050243   32779 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 20:14:52.066337   32779 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 20:14:52.082143   32779 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 20:14:52.223221   32779 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 20:14:52.433164   32779 docker.go:234] disabling docker service ...
	I1217 20:14:52.433251   32779 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 20:14:52.449361   32779 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 20:14:52.463193   32779 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 20:14:52.612143   32779 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 20:14:52.749054   32779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 20:14:52.764748   32779 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 20:14:52.786158   32779 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 20:14:52.786232   32779 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:14:52.797543   32779 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 20:14:52.797623   32779 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:14:52.809135   32779 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:14:52.820550   32779 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:14:52.832213   32779 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 20:14:52.844488   32779 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:14:52.855823   32779 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:14:52.874975   32779 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:14:52.886284   32779 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 20:14:52.895935   32779 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1217 20:14:52.895974   32779 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1217 20:14:52.917483   32779 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 20:14:52.928859   32779 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:14:53.067006   32779 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 20:14:53.177644   32779 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 20:14:53.177729   32779 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 20:14:53.183017   32779 start.go:564] Will wait 60s for crictl version
	I1217 20:14:53.183066   32779 ssh_runner.go:195] Run: which crictl
	I1217 20:14:53.187008   32779 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1217 20:14:53.223522   32779 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1217 20:14:53.223633   32779 ssh_runner.go:195] Run: crio --version
	I1217 20:14:53.251865   32779 ssh_runner.go:195] Run: crio --version
	I1217 20:14:53.282583   32779 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.29.1 ...
	I1217 20:14:53.285842   32779 main.go:143] libmachine: domain test-preload-900733 has defined MAC address 52:54:00:82:8b:7b in network mk-test-preload-900733
	I1217 20:14:53.286205   32779 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:82:8b:7b", ip: ""} in network mk-test-preload-900733: {Iface:virbr1 ExpiryTime:2025-12-17 21:14:48 +0000 UTC Type:0 Mac:52:54:00:82:8b:7b Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:test-preload-900733 Clientid:01:52:54:00:82:8b:7b}
	I1217 20:14:53.286240   32779 main.go:143] libmachine: domain test-preload-900733 has defined IP address 192.168.39.26 and MAC address 52:54:00:82:8b:7b in network mk-test-preload-900733
	I1217 20:14:53.286432   32779 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1217 20:14:53.290640   32779 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 20:14:53.306042   32779 kubeadm.go:884] updating cluster {Name:test-preload-900733 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:test-preload-900733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 20:14:53.306152   32779 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:14:53.306199   32779 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:14:53.340240   32779 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.3". assuming images are not preloaded.
	I1217 20:14:53.340332   32779 ssh_runner.go:195] Run: which lz4
	I1217 20:14:53.344621   32779 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1217 20:14:53.349222   32779 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1217 20:14:53.349250   32779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340314847 bytes)
	I1217 20:14:54.544877   32779 crio.go:462] duration metric: took 1.200291428s to copy over tarball
	I1217 20:14:54.544954   32779 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1217 20:14:55.998088   32779 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.453107166s)
	I1217 20:14:55.998115   32779 crio.go:469] duration metric: took 1.453207454s to extract the tarball
	I1217 20:14:55.998124   32779 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1217 20:14:56.034614   32779 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:14:56.077356   32779 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:14:56.077386   32779 cache_images.go:86] Images are preloaded, skipping loading
	I1217 20:14:56.077396   32779 kubeadm.go:935] updating node { 192.168.39.26 8443 v1.34.3 crio true true} ...
	I1217 20:14:56.077523   32779 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-900733 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.26
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:test-preload-900733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 20:14:56.077615   32779 ssh_runner.go:195] Run: crio config
	I1217 20:14:56.121866   32779 cni.go:84] Creating CNI manager for ""
	I1217 20:14:56.121889   32779 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 20:14:56.121904   32779 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 20:14:56.121923   32779 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.26 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-900733 NodeName:test-preload-900733 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.26"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.26 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 20:14:56.122029   32779 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.26
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-900733"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.26"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.26"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 20:14:56.122088   32779 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 20:14:56.134853   32779 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 20:14:56.134930   32779 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 20:14:56.146438   32779 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1217 20:14:56.166214   32779 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 20:14:56.186189   32779 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1217 20:14:56.206472   32779 ssh_runner.go:195] Run: grep 192.168.39.26	control-plane.minikube.internal$ /etc/hosts
	I1217 20:14:56.210805   32779 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.26	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 20:14:56.225003   32779 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:14:56.366927   32779 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:14:56.405920   32779 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/test-preload-900733 for IP: 192.168.39.26
	I1217 20:14:56.405949   32779 certs.go:195] generating shared ca certs ...
	I1217 20:14:56.405968   32779 certs.go:227] acquiring lock for ca certs: {Name:mka9d751f3e3cbcb654d1f1d24f2b10b27bc58a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:14:56.406155   32779 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-3611/.minikube/ca.key
	I1217 20:14:56.406217   32779 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-3611/.minikube/proxy-client-ca.key
	I1217 20:14:56.406234   32779 certs.go:257] generating profile certs ...
	I1217 20:14:56.406357   32779 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/test-preload-900733/client.key
	I1217 20:14:56.406448   32779 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/test-preload-900733/apiserver.key.38da2437
	I1217 20:14:56.406505   32779 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/test-preload-900733/proxy-client.key
	I1217 20:14:56.406710   32779 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-3611/.minikube/certs/7531.pem (1338 bytes)
	W1217 20:14:56.406762   32779 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-3611/.minikube/certs/7531_empty.pem, impossibly tiny 0 bytes
	I1217 20:14:56.406778   32779 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:14:56.406820   32779 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:14:56.406858   32779 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-3611/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:14:56.406895   32779 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-3611/.minikube/certs/key.pem (1679 bytes)
	I1217 20:14:56.406962   32779 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-3611/.minikube/files/etc/ssl/certs/75312.pem (1708 bytes)
	I1217 20:14:56.407814   32779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 20:14:56.443062   32779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 20:14:56.474251   32779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 20:14:56.502573   32779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 20:14:56.531293   32779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/test-preload-900733/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1217 20:14:56.559827   32779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/test-preload-900733/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 20:14:56.589038   32779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/test-preload-900733/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 20:14:56.619122   32779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/test-preload-900733/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 20:14:56.648862   32779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:14:56.677674   32779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/certs/7531.pem --> /usr/share/ca-certificates/7531.pem (1338 bytes)
	I1217 20:14:56.706290   32779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/files/etc/ssl/certs/75312.pem --> /usr/share/ca-certificates/75312.pem (1708 bytes)
	I1217 20:14:56.734995   32779 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 20:14:56.755189   32779 ssh_runner.go:195] Run: openssl version
	I1217 20:14:56.761657   32779 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:14:56.772605   32779 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:14:56.784340   32779 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:14:56.789390   32779 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 19:21 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:14:56.789448   32779 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:14:56.796449   32779 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:14:56.807424   32779 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 20:14:56.818699   32779 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7531.pem
	I1217 20:14:56.829910   32779 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7531.pem /etc/ssl/certs/7531.pem
	I1217 20:14:56.841479   32779 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7531.pem
	I1217 20:14:56.846504   32779 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 19:32 /usr/share/ca-certificates/7531.pem
	I1217 20:14:56.846568   32779 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7531.pem
	I1217 20:14:56.853743   32779 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:14:56.864497   32779 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/7531.pem /etc/ssl/certs/51391683.0
	I1217 20:14:56.875774   32779 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/75312.pem
	I1217 20:14:56.887024   32779 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/75312.pem /etc/ssl/certs/75312.pem
	I1217 20:14:56.898341   32779 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75312.pem
	I1217 20:14:56.903341   32779 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 19:32 /usr/share/ca-certificates/75312.pem
	I1217 20:14:56.903388   32779 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75312.pem
	I1217 20:14:56.910750   32779 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:14:56.921832   32779 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/75312.pem /etc/ssl/certs/3ec20f2e.0
	I1217 20:14:56.932958   32779 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 20:14:56.937912   32779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 20:14:56.944938   32779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 20:14:56.952018   32779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 20:14:56.958907   32779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 20:14:56.965804   32779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 20:14:56.972425   32779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 20:14:56.979095   32779 kubeadm.go:401] StartCluster: {Name:test-preload-900733 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:test-preload-900733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:14:56.979161   32779 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 20:14:56.979192   32779 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:14:57.017673   32779 cri.go:89] found id: ""
	I1217 20:14:57.017751   32779 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 20:14:57.030156   32779 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 20:14:57.030177   32779 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 20:14:57.030222   32779 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 20:14:57.042070   32779 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 20:14:57.042512   32779 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-900733" does not appear in /home/jenkins/minikube-integration/22186-3611/kubeconfig
	I1217 20:14:57.042661   32779 kubeconfig.go:62] /home/jenkins/minikube-integration/22186-3611/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-900733" cluster setting kubeconfig missing "test-preload-900733" context setting]
	I1217 20:14:57.042936   32779 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-3611/kubeconfig: {Name:mk319ed0207c46a4a2ae4d9b320056846508447c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:14:57.043394   32779 kapi.go:59] client config for test-preload-900733: &rest.Config{Host:"https://192.168.39.26:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22186-3611/.minikube/profiles/test-preload-900733/client.crt", KeyFile:"/home/jenkins/minikube-integration/22186-3611/.minikube/profiles/test-preload-900733/client.key", CAFile:"/home/jenkins/minikube-integration/22186-3611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2817500), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 20:14:57.043795   32779 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 20:14:57.043809   32779 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 20:14:57.043813   32779 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 20:14:57.043817   32779 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 20:14:57.043821   32779 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 20:14:57.044086   32779 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 20:14:57.055465   32779 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.26
	I1217 20:14:57.055501   32779 kubeadm.go:1161] stopping kube-system containers ...
	I1217 20:14:57.055517   32779 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1217 20:14:57.055593   32779 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:14:57.088902   32779 cri.go:89] found id: ""
	I1217 20:14:57.088976   32779 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1217 20:14:57.111613   32779 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 20:14:57.123713   32779 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 20:14:57.123736   32779 kubeadm.go:158] found existing configuration files:
	
	I1217 20:14:57.123773   32779 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 20:14:57.134319   32779 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 20:14:57.134378   32779 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 20:14:57.145159   32779 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 20:14:57.155197   32779 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 20:14:57.155244   32779 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 20:14:57.165837   32779 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 20:14:57.176880   32779 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 20:14:57.176950   32779 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 20:14:57.187870   32779 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 20:14:57.197937   32779 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 20:14:57.197990   32779 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 20:14:57.209001   32779 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 20:14:57.220195   32779 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 20:14:57.273720   32779 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 20:14:58.318669   32779 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.044908708s)
	I1217 20:14:58.318772   32779 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1217 20:14:58.550630   32779 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 20:14:58.633402   32779 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1217 20:14:58.700172   32779 api_server.go:52] waiting for apiserver process to appear ...
	I1217 20:14:58.700268   32779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:14:59.200667   32779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:14:59.700796   32779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:15:00.201367   32779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:15:00.700797   32779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:15:00.730376   32779 api_server.go:72] duration metric: took 2.030227648s to wait for apiserver process to appear ...
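	[editor's note] The "waiting for apiserver process to appear" step above is a simple 500ms poll of pgrep until kube-apiserver shows up (about 2s here). A stand-alone Go sketch of the same loop, with the pgrep pattern taken from the log and the deadline chosen arbitrarily for illustration:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			fmt.Printf("kube-apiserver running, pid: %s", out)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for kube-apiserver process")
}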
	I1217 20:15:00.730402   32779 api_server.go:88] waiting for apiserver healthz status ...
	I1217 20:15:00.730425   32779 api_server.go:253] Checking apiserver healthz at https://192.168.39.26:8443/healthz ...
	I1217 20:15:00.730978   32779 api_server.go:269] stopped: https://192.168.39.26:8443/healthz: Get "https://192.168.39.26:8443/healthz": dial tcp 192.168.39.26:8443: connect: connection refused
	I1217 20:15:01.230672   32779 api_server.go:253] Checking apiserver healthz at https://192.168.39.26:8443/healthz ...
	I1217 20:15:03.360936   32779 api_server.go:279] https://192.168.39.26:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1217 20:15:03.360969   32779 api_server.go:103] status: https://192.168.39.26:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1217 20:15:03.360987   32779 api_server.go:253] Checking apiserver healthz at https://192.168.39.26:8443/healthz ...
	I1217 20:15:03.376134   32779 api_server.go:279] https://192.168.39.26:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1217 20:15:03.376176   32779 api_server.go:103] status: https://192.168.39.26:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1217 20:15:03.730666   32779 api_server.go:253] Checking apiserver healthz at https://192.168.39.26:8443/healthz ...
	I1217 20:15:03.739886   32779 api_server.go:279] https://192.168.39.26:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 20:15:03.739916   32779 api_server.go:103] status: https://192.168.39.26:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 20:15:04.230552   32779 api_server.go:253] Checking apiserver healthz at https://192.168.39.26:8443/healthz ...
	I1217 20:15:04.237417   32779 api_server.go:279] https://192.168.39.26:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 20:15:04.237440   32779 api_server.go:103] status: https://192.168.39.26:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 20:15:04.731136   32779 api_server.go:253] Checking apiserver healthz at https://192.168.39.26:8443/healthz ...
	I1217 20:15:04.735657   32779 api_server.go:279] https://192.168.39.26:8443/healthz returned 200:
	ok
	I1217 20:15:04.742123   32779 api_server.go:141] control plane version: v1.34.3
	I1217 20:15:04.742147   32779 api_server.go:131] duration metric: took 4.011738583s to wait for apiserver health ...
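	[editor's note] The healthz wait above goes through the expected startup progression: connection refused while the apiserver binds, 403 while anonymous access to /healthz is still blocked, 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are pending, and finally 200 "ok". A rough Go sketch of such a polling loop is below; the endpoint and retryable statuses come from the log, but TLS verification is skipped here purely for brevity, whereas the real wait would use the cluster CA.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	url := "https://192.168.39.26:8443/healthz" // endpoint from the log above
	deadline := time.Now().Add(2 * time.Minute)

	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body)
				return
			}
			// 403 and 500 are transient during bootstrap; keep retrying.
			fmt.Printf("healthz not ready yet: %d\n", resp.StatusCode)
		} else {
			fmt.Printf("healthz unreachable: %v\n", err) // e.g. connection refused
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for a healthy apiserver")
}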
	I1217 20:15:04.742156   32779 cni.go:84] Creating CNI manager for ""
	I1217 20:15:04.742163   32779 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 20:15:04.743660   32779 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1217 20:15:04.744784   32779 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1217 20:15:04.761624   32779 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
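	[editor's note] The bridge CNI step above just creates /etc/cni/net.d and drops a conflist into it. The sketch below writes a generic bridge + portmap conflist to that path; the JSON is an assumed, illustrative configuration of the same kind, not the exact 496-byte file minikube copies.

package main

import (
	"log"
	"os"
)

// A generic bridge + portmap conflist for illustration only.
const conflist = `{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil { // mirrors the "mkdir -p" above
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		log.Fatal(err)
	}
	log.Println("wrote /etc/cni/net.d/1-k8s.conflist")
}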
	I1217 20:15:04.783085   32779 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 20:15:04.790614   32779 system_pods.go:59] 7 kube-system pods found
	I1217 20:15:04.790647   32779 system_pods.go:61] "coredns-66bc5c9577-dwfn6" [e5364f0e-fbe2-4459-8fe7-2bb3f0eef3da] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 20:15:04.790658   32779 system_pods.go:61] "etcd-test-preload-900733" [2894a2ba-c2d3-4ce9-bebb-41d8b7c81a7e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 20:15:04.790670   32779 system_pods.go:61] "kube-apiserver-test-preload-900733" [21055375-7db0-43d8-b2ee-8f5f4a87f5b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 20:15:04.790684   32779 system_pods.go:61] "kube-controller-manager-test-preload-900733" [461e670e-a365-4297-abcf-1652057897a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 20:15:04.790694   32779 system_pods.go:61] "kube-proxy-jdcq4" [5cdc53e6-3eb1-472b-a8c7-ae6ddd51a865] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 20:15:04.790713   32779 system_pods.go:61] "kube-scheduler-test-preload-900733" [0a9eacc4-d011-456e-8b43-70398a4f7429] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 20:15:04.790723   32779 system_pods.go:61] "storage-provisioner" [c8a6b0f7-1db5-46fb-88d7-ca961858efff] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 20:15:04.790732   32779 system_pods.go:74] duration metric: took 7.627904ms to wait for pod list to return data ...
	I1217 20:15:04.790746   32779 node_conditions.go:102] verifying NodePressure condition ...
	I1217 20:15:04.796058   32779 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1217 20:15:04.796086   32779 node_conditions.go:123] node cpu capacity is 2
	I1217 20:15:04.796100   32779 node_conditions.go:105] duration metric: took 5.349051ms to run NodePressure ...
	I1217 20:15:04.796158   32779 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 20:15:05.062818   32779 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1217 20:15:05.071466   32779 kubeadm.go:744] kubelet initialised
	I1217 20:15:05.071488   32779 kubeadm.go:745] duration metric: took 8.645777ms waiting for restarted kubelet to initialise ...
	I1217 20:15:05.071502   32779 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 20:15:05.087718   32779 ops.go:34] apiserver oom_adj: -16
	I1217 20:15:05.087740   32779 kubeadm.go:602] duration metric: took 8.057556539s to restartPrimaryControlPlane
	I1217 20:15:05.087748   32779 kubeadm.go:403] duration metric: took 8.108659299s to StartCluster
	I1217 20:15:05.087762   32779 settings.go:142] acquiring lock: {Name:mke3c622f98fffe95e3e848232032c1bad05dc71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:15:05.087844   32779 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22186-3611/kubeconfig
	I1217 20:15:05.088551   32779 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-3611/kubeconfig: {Name:mk319ed0207c46a4a2ae4d9b320056846508447c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:15:05.088764   32779 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 20:15:05.088849   32779 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 20:15:05.088926   32779 addons.go:70] Setting storage-provisioner=true in profile "test-preload-900733"
	I1217 20:15:05.088940   32779 addons.go:239] Setting addon storage-provisioner=true in "test-preload-900733"
	W1217 20:15:05.088947   32779 addons.go:248] addon storage-provisioner should already be in state true
	I1217 20:15:05.088957   32779 config.go:182] Loaded profile config "test-preload-900733": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:15:05.088978   32779 host.go:66] Checking if "test-preload-900733" exists ...
	I1217 20:15:05.088981   32779 addons.go:70] Setting default-storageclass=true in profile "test-preload-900733"
	I1217 20:15:05.089012   32779 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-900733"
	I1217 20:15:05.090233   32779 out.go:179] * Verifying Kubernetes components...
	I1217 20:15:05.091375   32779 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:15:05.091405   32779 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 20:15:05.091479   32779 kapi.go:59] client config for test-preload-900733: &rest.Config{Host:"https://192.168.39.26:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22186-3611/.minikube/profiles/test-preload-900733/client.crt", KeyFile:"/home/jenkins/minikube-integration/22186-3611/.minikube/profiles/test-preload-900733/client.key", CAFile:"/home/jenkins/minikube-integration/22186-3611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2817500), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 20:15:05.091777   32779 addons.go:239] Setting addon default-storageclass=true in "test-preload-900733"
	W1217 20:15:05.091792   32779 addons.go:248] addon default-storageclass should already be in state true
	I1217 20:15:05.091815   32779 host.go:66] Checking if "test-preload-900733" exists ...
	I1217 20:15:05.092581   32779 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:15:05.092598   32779 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 20:15:05.093554   32779 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 20:15:05.093569   32779 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 20:15:05.095715   32779 main.go:143] libmachine: domain test-preload-900733 has defined MAC address 52:54:00:82:8b:7b in network mk-test-preload-900733
	I1217 20:15:05.096139   32779 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:82:8b:7b", ip: ""} in network mk-test-preload-900733: {Iface:virbr1 ExpiryTime:2025-12-17 21:14:48 +0000 UTC Type:0 Mac:52:54:00:82:8b:7b Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:test-preload-900733 Clientid:01:52:54:00:82:8b:7b}
	I1217 20:15:05.096169   32779 main.go:143] libmachine: domain test-preload-900733 has defined IP address 192.168.39.26 and MAC address 52:54:00:82:8b:7b in network mk-test-preload-900733
	I1217 20:15:05.096334   32779 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/test-preload-900733/id_rsa Username:docker}
	I1217 20:15:05.096683   32779 main.go:143] libmachine: domain test-preload-900733 has defined MAC address 52:54:00:82:8b:7b in network mk-test-preload-900733
	I1217 20:15:05.097134   32779 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:82:8b:7b", ip: ""} in network mk-test-preload-900733: {Iface:virbr1 ExpiryTime:2025-12-17 21:14:48 +0000 UTC Type:0 Mac:52:54:00:82:8b:7b Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:test-preload-900733 Clientid:01:52:54:00:82:8b:7b}
	I1217 20:15:05.097163   32779 main.go:143] libmachine: domain test-preload-900733 has defined IP address 192.168.39.26 and MAC address 52:54:00:82:8b:7b in network mk-test-preload-900733
	I1217 20:15:05.097325   32779 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/test-preload-900733/id_rsa Username:docker}
	I1217 20:15:05.285245   32779 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:15:05.305343   32779 node_ready.go:35] waiting up to 6m0s for node "test-preload-900733" to be "Ready" ...
	I1217 20:15:05.385103   32779 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:15:05.391296   32779 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:15:06.120705   32779 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1217 20:15:06.121930   32779 addons.go:530] duration metric: took 1.033084112s for enable addons: enabled=[storage-provisioner default-storageclass]
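	[editor's note] The two addons enabled here (storage-provisioner, default-storageclass) are installed by copying their manifests into /etc/kubernetes/addons on the node and applying them with the version-pinned kubectl against the in-VM kubeconfig, exactly as the two Run lines above show. A small Go sketch that issues the same shape of command (simplified, run locally rather than over SSH):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Manifests the addon step copied to the node, per the log above.
	manifests := []string{
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
	}
	for _, m := range manifests {
		// Same form as the logged command: sudo KUBECONFIG=... kubectl apply -f ...
		cmd := exec.Command("sudo",
			"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.34.3/kubectl", "apply", "-f", m)
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("apply %s failed: %v\n%s", m, err, out)
		}
		fmt.Printf("applied %s\n", m)
	}
}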
	W1217 20:15:07.309304   32779 node_ready.go:57] node "test-preload-900733" has "Ready":"False" status (will retry)
	W1217 20:15:09.310249   32779 node_ready.go:57] node "test-preload-900733" has "Ready":"False" status (will retry)
	W1217 20:15:11.810722   32779 node_ready.go:57] node "test-preload-900733" has "Ready":"False" status (will retry)
	I1217 20:15:13.809216   32779 node_ready.go:49] node "test-preload-900733" is "Ready"
	I1217 20:15:13.809251   32779 node_ready.go:38] duration metric: took 8.503870551s for node "test-preload-900733" to be "Ready" ...
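	[editor's note] The node_ready wait above polls the Node object and retries while its Ready condition is still False, which takes about 8.5s here. An illustrative client-go version of that poll is sketched below; the kubeconfig path is a placeholder and the interval and timeout are assumptions, not the values minikube uses.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "test-preload-900733", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("timed out waiting for node to become Ready")
}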
	I1217 20:15:13.809270   32779 api_server.go:52] waiting for apiserver process to appear ...
	I1217 20:15:13.809332   32779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:15:13.830955   32779 api_server.go:72] duration metric: took 8.742161474s to wait for apiserver process to appear ...
	I1217 20:15:13.830978   32779 api_server.go:88] waiting for apiserver healthz status ...
	I1217 20:15:13.830992   32779 api_server.go:253] Checking apiserver healthz at https://192.168.39.26:8443/healthz ...
	I1217 20:15:13.835474   32779 api_server.go:279] https://192.168.39.26:8443/healthz returned 200:
	ok
	I1217 20:15:13.836381   32779 api_server.go:141] control plane version: v1.34.3
	I1217 20:15:13.836400   32779 api_server.go:131] duration metric: took 5.416654ms to wait for apiserver health ...
	I1217 20:15:13.836410   32779 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 20:15:13.840833   32779 system_pods.go:59] 7 kube-system pods found
	I1217 20:15:13.840866   32779 system_pods.go:61] "coredns-66bc5c9577-dwfn6" [e5364f0e-fbe2-4459-8fe7-2bb3f0eef3da] Running
	I1217 20:15:13.840875   32779 system_pods.go:61] "etcd-test-preload-900733" [2894a2ba-c2d3-4ce9-bebb-41d8b7c81a7e] Running
	I1217 20:15:13.840881   32779 system_pods.go:61] "kube-apiserver-test-preload-900733" [21055375-7db0-43d8-b2ee-8f5f4a87f5b2] Running
	I1217 20:15:13.840893   32779 system_pods.go:61] "kube-controller-manager-test-preload-900733" [461e670e-a365-4297-abcf-1652057897a2] Running
	I1217 20:15:13.840899   32779 system_pods.go:61] "kube-proxy-jdcq4" [5cdc53e6-3eb1-472b-a8c7-ae6ddd51a865] Running
	I1217 20:15:13.840905   32779 system_pods.go:61] "kube-scheduler-test-preload-900733" [0a9eacc4-d011-456e-8b43-70398a4f7429] Running
	I1217 20:15:13.840918   32779 system_pods.go:61] "storage-provisioner" [c8a6b0f7-1db5-46fb-88d7-ca961858efff] Running
	I1217 20:15:13.840925   32779 system_pods.go:74] duration metric: took 4.507526ms to wait for pod list to return data ...
	I1217 20:15:13.840944   32779 default_sa.go:34] waiting for default service account to be created ...
	I1217 20:15:13.845058   32779 default_sa.go:45] found service account: "default"
	I1217 20:15:13.845078   32779 default_sa.go:55] duration metric: took 4.123865ms for default service account to be created ...
	I1217 20:15:13.845085   32779 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 20:15:13.940171   32779 system_pods.go:86] 7 kube-system pods found
	I1217 20:15:13.940198   32779 system_pods.go:89] "coredns-66bc5c9577-dwfn6" [e5364f0e-fbe2-4459-8fe7-2bb3f0eef3da] Running
	I1217 20:15:13.940205   32779 system_pods.go:89] "etcd-test-preload-900733" [2894a2ba-c2d3-4ce9-bebb-41d8b7c81a7e] Running
	I1217 20:15:13.940208   32779 system_pods.go:89] "kube-apiserver-test-preload-900733" [21055375-7db0-43d8-b2ee-8f5f4a87f5b2] Running
	I1217 20:15:13.940212   32779 system_pods.go:89] "kube-controller-manager-test-preload-900733" [461e670e-a365-4297-abcf-1652057897a2] Running
	I1217 20:15:13.940215   32779 system_pods.go:89] "kube-proxy-jdcq4" [5cdc53e6-3eb1-472b-a8c7-ae6ddd51a865] Running
	I1217 20:15:13.940218   32779 system_pods.go:89] "kube-scheduler-test-preload-900733" [0a9eacc4-d011-456e-8b43-70398a4f7429] Running
	I1217 20:15:13.940221   32779 system_pods.go:89] "storage-provisioner" [c8a6b0f7-1db5-46fb-88d7-ca961858efff] Running
	I1217 20:15:13.940229   32779 system_pods.go:126] duration metric: took 95.13941ms to wait for k8s-apps to be running ...
	I1217 20:15:13.940236   32779 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 20:15:13.940276   32779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 20:15:13.957289   32779 system_svc.go:56] duration metric: took 17.043402ms WaitForService to wait for kubelet
	I1217 20:15:13.957319   32779 kubeadm.go:587] duration metric: took 8.868529799s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 20:15:13.957342   32779 node_conditions.go:102] verifying NodePressure condition ...
	I1217 20:15:13.960965   32779 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1217 20:15:13.960984   32779 node_conditions.go:123] node cpu capacity is 2
	I1217 20:15:13.960999   32779 node_conditions.go:105] duration metric: took 3.651526ms to run NodePressure ...
	I1217 20:15:13.961009   32779 start.go:242] waiting for startup goroutines ...
	I1217 20:15:13.961019   32779 start.go:247] waiting for cluster config update ...
	I1217 20:15:13.961028   32779 start.go:256] writing updated cluster config ...
	I1217 20:15:13.961285   32779 ssh_runner.go:195] Run: rm -f paused
	I1217 20:15:13.966695   32779 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 20:15:13.967112   32779 kapi.go:59] client config for test-preload-900733: &rest.Config{Host:"https://192.168.39.26:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22186-3611/.minikube/profiles/test-preload-900733/client.crt", KeyFile:"/home/jenkins/minikube-integration/22186-3611/.minikube/profiles/test-preload-900733/client.key", CAFile:"/home/jenkins/minikube-integration/22186-3611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2817500), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 20:15:13.969689   32779 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dwfn6" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:15:13.973855   32779 pod_ready.go:94] pod "coredns-66bc5c9577-dwfn6" is "Ready"
	I1217 20:15:13.973871   32779 pod_ready.go:86] duration metric: took 4.164111ms for pod "coredns-66bc5c9577-dwfn6" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:15:13.975645   32779 pod_ready.go:83] waiting for pod "etcd-test-preload-900733" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:15:13.979611   32779 pod_ready.go:94] pod "etcd-test-preload-900733" is "Ready"
	I1217 20:15:13.979634   32779 pod_ready.go:86] duration metric: took 3.972398ms for pod "etcd-test-preload-900733" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:15:13.981476   32779 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-900733" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:15:13.985678   32779 pod_ready.go:94] pod "kube-apiserver-test-preload-900733" is "Ready"
	I1217 20:15:13.985701   32779 pod_ready.go:86] duration metric: took 4.20944ms for pod "kube-apiserver-test-preload-900733" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:15:13.987249   32779 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-900733" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:15:14.371290   32779 pod_ready.go:94] pod "kube-controller-manager-test-preload-900733" is "Ready"
	I1217 20:15:14.371321   32779 pod_ready.go:86] duration metric: took 384.049616ms for pod "kube-controller-manager-test-preload-900733" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:15:14.570477   32779 pod_ready.go:83] waiting for pod "kube-proxy-jdcq4" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:15:14.971713   32779 pod_ready.go:94] pod "kube-proxy-jdcq4" is "Ready"
	I1217 20:15:14.971736   32779 pod_ready.go:86] duration metric: took 401.238429ms for pod "kube-proxy-jdcq4" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:15:15.172006   32779 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-900733" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:15:15.570781   32779 pod_ready.go:94] pod "kube-scheduler-test-preload-900733" is "Ready"
	I1217 20:15:15.570806   32779 pod_ready.go:86] duration metric: took 398.776968ms for pod "kube-scheduler-test-preload-900733" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:15:15.570816   32779 pod_ready.go:40] duration metric: took 1.604092488s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
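	[editor's note] The extra pod_ready wait above goes through one label selector per control-plane component and blocks until every matching kube-system pod reports the Ready condition (or is gone). A client-go sketch of the readiness check is below; the kubeconfig path is a placeholder and the helper is illustrative, not minikube's pod_ready implementation.

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether a pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// One selector per control-plane component, mirroring the list in the log.
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	for _, sel := range selectors {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			log.Fatal(err)
		}
		for i := range pods.Items {
			fmt.Printf("%-50s ready=%v\n", pods.Items[i].Name, podReady(&pods.Items[i]))
		}
	}
}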
	I1217 20:15:15.612069   32779 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1217 20:15:15.613976   32779 out.go:179] * Done! kubectl is now configured to use "test-preload-900733" cluster and "default" namespace by default
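	[editor's note] The "minor skew: 1" line above compares the host kubectl (1.35.0) with the cluster version (1.34.3); a skew larger than one minor version would normally be flagged, since kubectl is only supported within one minor version of the apiserver. A tiny sketch of that comparison, with the version strings hard-coded from the log and the parsing deliberately simplified:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor component of a "major.minor.patch" version string.
func minor(v string) int {
	parts := strings.Split(v, ".")
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	kubectl, cluster := "1.35.0", "1.34.3" // values from the log above
	skew := minor(kubectl) - minor(cluster)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl %s vs cluster %s: minor skew %d\n", kubectl, cluster, skew)
	if skew > 1 {
		fmt.Println("warning: kubectl is more than one minor version away from the cluster")
	}
}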
	
	
	==> CRI-O <==
	Dec 17 20:15:16 test-preload-900733 crio[836]: time="2025-12-17 20:15:16.325002369Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766002516324979652,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135813,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ddefeec8-1375-4c75-9c67-c0bdaa29cd39 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 20:15:16 test-preload-900733 crio[836]: time="2025-12-17 20:15:16.325974639Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c4dc8161-aaf9-4267-aa9d-df8cd5d0645d name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 20:15:16 test-preload-900733 crio[836]: time="2025-12-17 20:15:16.326068027Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c4dc8161-aaf9-4267-aa9d-df8cd5d0645d name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 20:15:16 test-preload-900733 crio[836]: time="2025-12-17 20:15:16.326245013Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db650e3df50afe6a0f38365d7cf2c40063fe5043732a35fab9f6fcdc3241b639,PodSandboxId:62174c3609b795e6f9379f49c8c991708675d114ae5da1dbeef1c27cb6878c17,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766002511752912754,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-dwfn6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5364f0e-fbe2-4459-8fe7-2bb3f0eef3da,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0cd474315722139cb9399c7ccbb2c3da3afab64a7f738376b698b2c3f7ad470,PodSandboxId:046efb8fc6c430e33d281bae652c41badabfe2a22d0d51115becf11db468ef33,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766002504076258865,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jdcq4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cdc53e6-3eb1-472b-a8c7-ae6ddd51a865,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:534bcc85d5187d0325533972568217d98e38929894e24d13e9d4fc1c8559864e,PodSandboxId:02baa49adb39341d3fcd142fd661d9cdf72be244fa2bc2224ec17f3085223949,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766002504079253712,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8a6b0f7-1db5-46fb-88d7-ca961858efff,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18db88cdfe44d83210edaba78b0d39dc8d9a22e48e10ca0ac09571a00029ee6f,PodSandboxId:58de9ae8986b32feed1bc9622f3cf4e96e89c158c33c7dc775d37d0db877a401,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766002500588942667,Labels:map[string]string{io.kubernetes.container.name: etcd,io.k
ubernetes.pod.name: etcd-test-preload-900733,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 407abbe38798ca9061639a4524b64a05,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:647f17c762ef6d9dff9824a263c655d00b5a40d72ad232143a36bac0c4fa01ae,PodSandboxId:5391c6b2c756fba826794967b2c72c68f014cbd80a7c5a6e44010e764e77c544,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,Crea
tedAt:1766002500539619612,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-900733,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63a2bff5c7de66471ed0c6b8d0d01c0e,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28b77111321b0d3baba289b73cbcde379a7e020013a76f08b4123b144b8a425f,PodSandboxId:1a4ea243d8441cb22c89945324afcd865b1e7db208c3838758670f7ca132808c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766002500524469779,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-900733,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9ff6a1d3d24c29bb7add499f54de8b0,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a143eb28dd7e0c1d9b3e41145f6f4b26b348cb749883188270bb0d98bdd24c2,PodSandboxId:66ccc3da74305f6d7fe127ff4a1818c11d54ae2ad35aac80f22d7ebeebb74d9b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&Im
ageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1766002500457162435,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-900733,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8b42b37b4479e90f23f89e336d705e3,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c4dc8161-aaf9-4267-aa9d-df8cd5d0645d name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 17 20:15:16 test-preload-900733 crio[836]: time="2025-12-17 20:15:16.358473888Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b44b7658-d8cc-4460-b843-c8e2837e0b37 name=/runtime.v1.RuntimeService/Version
	Dec 17 20:15:16 test-preload-900733 crio[836]: time="2025-12-17 20:15:16.358640093Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b44b7658-d8cc-4460-b843-c8e2837e0b37 name=/runtime.v1.RuntimeService/Version
	Dec 17 20:15:16 test-preload-900733 crio[836]: time="2025-12-17 20:15:16.360018522Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d6efc6cf-45ca-48a0-b84a-f665944b4296 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 20:15:16 test-preload-900733 crio[836]: time="2025-12-17 20:15:16.360520636Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766002516360494270,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135813,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d6efc6cf-45ca-48a0-b84a-f665944b4296 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 20:15:16 test-preload-900733 crio[836]: time="2025-12-17 20:15:16.361595491Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=19781160-3ede-4b69-a1e2-5148bdffe9b3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 20:15:16 test-preload-900733 crio[836]: time="2025-12-17 20:15:16.361658886Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=19781160-3ede-4b69-a1e2-5148bdffe9b3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 20:15:16 test-preload-900733 crio[836]: time="2025-12-17 20:15:16.361884757Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db650e3df50afe6a0f38365d7cf2c40063fe5043732a35fab9f6fcdc3241b639,PodSandboxId:62174c3609b795e6f9379f49c8c991708675d114ae5da1dbeef1c27cb6878c17,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766002511752912754,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-dwfn6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5364f0e-fbe2-4459-8fe7-2bb3f0eef3da,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0cd474315722139cb9399c7ccbb2c3da3afab64a7f738376b698b2c3f7ad470,PodSandboxId:046efb8fc6c430e33d281bae652c41badabfe2a22d0d51115becf11db468ef33,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766002504076258865,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jdcq4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cdc53e6-3eb1-472b-a8c7-ae6ddd51a865,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:534bcc85d5187d0325533972568217d98e38929894e24d13e9d4fc1c8559864e,PodSandboxId:02baa49adb39341d3fcd142fd661d9cdf72be244fa2bc2224ec17f3085223949,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766002504079253712,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8a6b0f7-1db5-46fb-88d7-ca961858efff,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18db88cdfe44d83210edaba78b0d39dc8d9a22e48e10ca0ac09571a00029ee6f,PodSandboxId:58de9ae8986b32feed1bc9622f3cf4e96e89c158c33c7dc775d37d0db877a401,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766002500588942667,Labels:map[string]string{io.kubernetes.container.name: etcd,io.k
ubernetes.pod.name: etcd-test-preload-900733,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 407abbe38798ca9061639a4524b64a05,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:647f17c762ef6d9dff9824a263c655d00b5a40d72ad232143a36bac0c4fa01ae,PodSandboxId:5391c6b2c756fba826794967b2c72c68f014cbd80a7c5a6e44010e764e77c544,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,Crea
tedAt:1766002500539619612,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-900733,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63a2bff5c7de66471ed0c6b8d0d01c0e,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28b77111321b0d3baba289b73cbcde379a7e020013a76f08b4123b144b8a425f,PodSandboxId:1a4ea243d8441cb22c89945324afcd865b1e7db208c3838758670f7ca132808c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766002500524469779,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-900733,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9ff6a1d3d24c29bb7add499f54de8b0,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a143eb28dd7e0c1d9b3e41145f6f4b26b348cb749883188270bb0d98bdd24c2,PodSandboxId:66ccc3da74305f6d7fe127ff4a1818c11d54ae2ad35aac80f22d7ebeebb74d9b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&Im
ageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1766002500457162435,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-900733,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8b42b37b4479e90f23f89e336d705e3,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=19781160-3ede-4b69-a1e2-5148bdffe9b3 name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 17 20:15:16 test-preload-900733 crio[836]: time="2025-12-17 20:15:16.394744475Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e41e26b6-6787-48cf-ab6f-1cc735699fb3 name=/runtime.v1.RuntimeService/Version
	Dec 17 20:15:16 test-preload-900733 crio[836]: time="2025-12-17 20:15:16.395077930Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e41e26b6-6787-48cf-ab6f-1cc735699fb3 name=/runtime.v1.RuntimeService/Version
	Dec 17 20:15:16 test-preload-900733 crio[836]: time="2025-12-17 20:15:16.396651891Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3a57bb17-4913-47c2-95d0-6d4dfd340958 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 20:15:16 test-preload-900733 crio[836]: time="2025-12-17 20:15:16.397154105Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766002516397130439,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135813,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3a57bb17-4913-47c2-95d0-6d4dfd340958 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 20:15:16 test-preload-900733 crio[836]: time="2025-12-17 20:15:16.398188995Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0f13f02e-ced8-4354-8df2-c6e983402b0a name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 20:15:16 test-preload-900733 crio[836]: time="2025-12-17 20:15:16.398467246Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0f13f02e-ced8-4354-8df2-c6e983402b0a name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 20:15:16 test-preload-900733 crio[836]: time="2025-12-17 20:15:16.398674325Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db650e3df50afe6a0f38365d7cf2c40063fe5043732a35fab9f6fcdc3241b639,PodSandboxId:62174c3609b795e6f9379f49c8c991708675d114ae5da1dbeef1c27cb6878c17,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766002511752912754,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-dwfn6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5364f0e-fbe2-4459-8fe7-2bb3f0eef3da,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0cd474315722139cb9399c7ccbb2c3da3afab64a7f738376b698b2c3f7ad470,PodSandboxId:046efb8fc6c430e33d281bae652c41badabfe2a22d0d51115becf11db468ef33,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766002504076258865,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jdcq4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cdc53e6-3eb1-472b-a8c7-ae6ddd51a865,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:534bcc85d5187d0325533972568217d98e38929894e24d13e9d4fc1c8559864e,PodSandboxId:02baa49adb39341d3fcd142fd661d9cdf72be244fa2bc2224ec17f3085223949,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766002504079253712,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8a6b0f7-1db5-46fb-88d7-ca961858efff,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18db88cdfe44d83210edaba78b0d39dc8d9a22e48e10ca0ac09571a00029ee6f,PodSandboxId:58de9ae8986b32feed1bc9622f3cf4e96e89c158c33c7dc775d37d0db877a401,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766002500588942667,Labels:map[string]string{io.kubernetes.container.name: etcd,io.k
ubernetes.pod.name: etcd-test-preload-900733,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 407abbe38798ca9061639a4524b64a05,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:647f17c762ef6d9dff9824a263c655d00b5a40d72ad232143a36bac0c4fa01ae,PodSandboxId:5391c6b2c756fba826794967b2c72c68f014cbd80a7c5a6e44010e764e77c544,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,Crea
tedAt:1766002500539619612,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-900733,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63a2bff5c7de66471ed0c6b8d0d01c0e,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28b77111321b0d3baba289b73cbcde379a7e020013a76f08b4123b144b8a425f,PodSandboxId:1a4ea243d8441cb22c89945324afcd865b1e7db208c3838758670f7ca132808c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766002500524469779,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-900733,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9ff6a1d3d24c29bb7add499f54de8b0,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a143eb28dd7e0c1d9b3e41145f6f4b26b348cb749883188270bb0d98bdd24c2,PodSandboxId:66ccc3da74305f6d7fe127ff4a1818c11d54ae2ad35aac80f22d7ebeebb74d9b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&Im
ageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1766002500457162435,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-900733,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8b42b37b4479e90f23f89e336d705e3,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0f13f02e-ced8-4354-8df2-c6e983402b0a name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 17 20:15:16 test-preload-900733 crio[836]: time="2025-12-17 20:15:16.426951584Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6d55bee8-add6-43f0-80af-c14af5514eaf name=/runtime.v1.RuntimeService/Version
	Dec 17 20:15:16 test-preload-900733 crio[836]: time="2025-12-17 20:15:16.427044720Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6d55bee8-add6-43f0-80af-c14af5514eaf name=/runtime.v1.RuntimeService/Version
	Dec 17 20:15:16 test-preload-900733 crio[836]: time="2025-12-17 20:15:16.428715481Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=12b20a7d-143c-4d41-8ad1-8b0549e4c17c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 20:15:16 test-preload-900733 crio[836]: time="2025-12-17 20:15:16.429379422Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766002516429354469,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135813,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=12b20a7d-143c-4d41-8ad1-8b0549e4c17c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 20:15:16 test-preload-900733 crio[836]: time="2025-12-17 20:15:16.430568629Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=91307337-a5ad-47d0-bdad-7b33f049f221 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 20:15:16 test-preload-900733 crio[836]: time="2025-12-17 20:15:16.430714587Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=91307337-a5ad-47d0-bdad-7b33f049f221 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 20:15:16 test-preload-900733 crio[836]: time="2025-12-17 20:15:16.430879570Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db650e3df50afe6a0f38365d7cf2c40063fe5043732a35fab9f6fcdc3241b639,PodSandboxId:62174c3609b795e6f9379f49c8c991708675d114ae5da1dbeef1c27cb6878c17,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766002511752912754,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-dwfn6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5364f0e-fbe2-4459-8fe7-2bb3f0eef3da,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0cd474315722139cb9399c7ccbb2c3da3afab64a7f738376b698b2c3f7ad470,PodSandboxId:046efb8fc6c430e33d281bae652c41badabfe2a22d0d51115becf11db468ef33,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766002504076258865,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jdcq4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cdc53e6-3eb1-472b-a8c7-ae6ddd51a865,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:534bcc85d5187d0325533972568217d98e38929894e24d13e9d4fc1c8559864e,PodSandboxId:02baa49adb39341d3fcd142fd661d9cdf72be244fa2bc2224ec17f3085223949,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766002504079253712,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8a6b0f7-1db5-46fb-88d7-ca961858efff,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18db88cdfe44d83210edaba78b0d39dc8d9a22e48e10ca0ac09571a00029ee6f,PodSandboxId:58de9ae8986b32feed1bc9622f3cf4e96e89c158c33c7dc775d37d0db877a401,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766002500588942667,Labels:map[string]string{io.kubernetes.container.name: etcd,io.k
ubernetes.pod.name: etcd-test-preload-900733,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 407abbe38798ca9061639a4524b64a05,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:647f17c762ef6d9dff9824a263c655d00b5a40d72ad232143a36bac0c4fa01ae,PodSandboxId:5391c6b2c756fba826794967b2c72c68f014cbd80a7c5a6e44010e764e77c544,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,Crea
tedAt:1766002500539619612,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-900733,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63a2bff5c7de66471ed0c6b8d0d01c0e,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28b77111321b0d3baba289b73cbcde379a7e020013a76f08b4123b144b8a425f,PodSandboxId:1a4ea243d8441cb22c89945324afcd865b1e7db208c3838758670f7ca132808c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766002500524469779,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-900733,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9ff6a1d3d24c29bb7add499f54de8b0,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a143eb28dd7e0c1d9b3e41145f6f4b26b348cb749883188270bb0d98bdd24c2,PodSandboxId:66ccc3da74305f6d7fe127ff4a1818c11d54ae2ad35aac80f22d7ebeebb74d9b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&Im
ageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1766002500457162435,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-900733,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8b42b37b4479e90f23f89e336d705e3,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=91307337-a5ad-47d0-bdad-7b33f049f221 name=/runtime.v1.RuntimeServic
e/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                           NAMESPACE
	db650e3df50af       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   4 seconds ago       Running             coredns                   1                   62174c3609b79       coredns-66bc5c9577-dwfn6                      kube-system
	534bcc85d5187       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 seconds ago      Running             storage-provisioner       1                   02baa49adb393       storage-provisioner                           kube-system
	a0cd474315722       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691   12 seconds ago      Running             kube-proxy                1                   046efb8fc6c43       kube-proxy-jdcq4                              kube-system
	18db88cdfe44d       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   15 seconds ago      Running             etcd                      1                   58de9ae8986b3       etcd-test-preload-900733                      kube-system
	647f17c762ef6       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78   15 seconds ago      Running             kube-scheduler            1                   5391c6b2c756f       kube-scheduler-test-preload-900733            kube-system
	28b77111321b0       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942   15 seconds ago      Running             kube-controller-manager   1                   1a4ea243d8441       kube-controller-manager-test-preload-900733   kube-system
	4a143eb28dd7e       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c   16 seconds ago      Running             kube-apiserver            1                   66ccc3da74305       kube-apiserver-test-preload-900733            kube-system
	
	
	==> coredns [db650e3df50afe6a0f38365d7cf2c40063fe5043732a35fab9f6fcdc3241b639] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39608 - 50700 "HINFO IN 223561105272186720.7634559288246329128. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.046889609s
	
	
	==> describe nodes <==
	Name:               test-preload-900733
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-900733
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2e96f676eb7e96389e85fe0658a4ede4c4ba6924
	                    minikube.k8s.io/name=test-preload-900733
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T20_13_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 20:13:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-900733
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 20:15:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 20:15:13 +0000   Wed, 17 Dec 2025 20:13:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 20:15:13 +0000   Wed, 17 Dec 2025 20:13:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 20:15:13 +0000   Wed, 17 Dec 2025 20:13:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 20:15:13 +0000   Wed, 17 Dec 2025 20:15:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.26
	  Hostname:    test-preload-900733
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 80cbdbb3952c4bfcaed3ce535da4852e
	  System UUID:                80cbdbb3-952c-4bfc-aed3-ce535da4852e
	  Boot ID:                    04aa3a02-273a-4ca2-b8b9-35e6b4622b94
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-dwfn6                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     86s
	  kube-system                 etcd-test-preload-900733                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         91s
	  kube-system                 kube-apiserver-test-preload-900733             250m (12%)    0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-controller-manager-test-preload-900733    200m (10%)    0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-proxy-jdcq4                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-scheduler-test-preload-900733             100m (5%)     0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 84s                kube-proxy       
	  Normal   Starting                 12s                kube-proxy       
	  Normal   NodeHasSufficientMemory  91s                kubelet          Node test-preload-900733 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  91s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    91s                kubelet          Node test-preload-900733 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     91s                kubelet          Node test-preload-900733 status is now: NodeHasSufficientPID
	  Normal   Starting                 91s                kubelet          Starting kubelet.
	  Normal   NodeReady                90s                kubelet          Node test-preload-900733 status is now: NodeReady
	  Normal   RegisteredNode           87s                node-controller  Node test-preload-900733 event: Registered Node test-preload-900733 in Controller
	  Normal   Starting                 18s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  18s (x8 over 18s)  kubelet          Node test-preload-900733 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18s (x8 over 18s)  kubelet          Node test-preload-900733 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18s (x7 over 18s)  kubelet          Node test-preload-900733 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  18s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 13s                kubelet          Node test-preload-900733 has been rebooted, boot id: 04aa3a02-273a-4ca2-b8b9-35e6b4622b94
	  Normal   RegisteredNode           10s                node-controller  Node test-preload-900733 event: Registered Node test-preload-900733 in Controller
	
	
	==> dmesg <==
	[Dec17 20:14] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000046] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.005203] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.973658] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000015] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.101117] kauditd_printk_skb: 88 callbacks suppressed
	[Dec17 20:15] kauditd_printk_skb: 196 callbacks suppressed
	[  +0.000269] kauditd_printk_skb: 128 callbacks suppressed
	
	
	==> etcd [18db88cdfe44d83210edaba78b0d39dc8d9a22e48e10ca0ac09571a00029ee6f] <==
	{"level":"warn","ts":"2025-12-17T20:15:02.407348Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:15:02.419971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:15:02.428962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:15:02.442620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:15:02.456091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:15:02.462503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:15:02.472795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:15:02.483490Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:15:02.497255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:15:02.506949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:15:02.520159Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:15:02.532054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:15:02.547073Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:15:02.556257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:15:02.571167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:15:02.588498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:15:02.597795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:15:02.619439Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:15:02.626827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:15:02.660754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:15:02.684695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:15:02.689557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:15:02.702074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:15:02.715179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:15:02.769602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35734","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:15:16 up 0 min,  0 users,  load average: 0.57, 0.16, 0.05
	Linux test-preload-900733 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Dec 17 12:49:57 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [4a143eb28dd7e0c1d9b3e41145f6f4b26b348cb749883188270bb0d98bdd24c2] <==
	I1217 20:15:03.385845       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1217 20:15:03.387485       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1217 20:15:03.387617       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1217 20:15:03.387968       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1217 20:15:03.393496       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1217 20:15:03.393534       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1217 20:15:03.393993       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1217 20:15:03.394098       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1217 20:15:03.394105       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1217 20:15:03.394169       1 cache.go:39] Caches are synced for RemoteAvailability controller
	E1217 20:15:03.410358       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1217 20:15:03.435853       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 20:15:03.443081       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1217 20:15:03.452473       1 cache.go:39] Caches are synced for autoregister controller
	I1217 20:15:03.452549       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1217 20:15:03.463913       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 20:15:03.735277       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 20:15:04.290547       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 20:15:04.873068       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 20:15:04.916874       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1217 20:15:04.951542       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 20:15:04.958653       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 20:15:07.120218       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 20:15:07.218599       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 20:15:07.270700       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [28b77111321b0d3baba289b73cbcde379a7e020013a76f08b4123b144b8a425f] <==
	I1217 20:15:06.769979       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1217 20:15:06.770071       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1217 20:15:06.770195       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1217 20:15:06.770201       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1217 20:15:06.770206       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1217 20:15:06.772777       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 20:15:06.784898       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1217 20:15:06.784998       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1217 20:15:06.784963       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1217 20:15:06.784976       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1217 20:15:06.788264       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1217 20:15:06.791615       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1217 20:15:06.791628       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1217 20:15:06.794893       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 20:15:06.796167       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1217 20:15:06.796276       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1217 20:15:06.800479       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1217 20:15:06.802791       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1217 20:15:06.804167       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1217 20:15:06.805569       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1217 20:15:06.811022       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1217 20:15:06.816498       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 20:15:06.816508       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1217 20:15:06.816514       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1217 20:15:16.723955       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [a0cd474315722139cb9399c7ccbb2c3da3afab64a7f738376b698b2c3f7ad470] <==
	I1217 20:15:04.279127       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 20:15:04.379630       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 20:15:04.379683       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.26"]
	E1217 20:15:04.379759       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 20:15:04.440818       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1217 20:15:04.440998       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1217 20:15:04.441189       1 server_linux.go:132] "Using iptables Proxier"
	I1217 20:15:04.464160       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 20:15:04.464583       1 server.go:527] "Version info" version="v1.34.3"
	I1217 20:15:04.464612       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 20:15:04.471523       1 config.go:200] "Starting service config controller"
	I1217 20:15:04.471630       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 20:15:04.471965       1 config.go:106] "Starting endpoint slice config controller"
	I1217 20:15:04.472059       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 20:15:04.472181       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 20:15:04.472272       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 20:15:04.473970       1 config.go:309] "Starting node config controller"
	I1217 20:15:04.474086       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 20:15:04.474109       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 20:15:04.572468       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 20:15:04.572553       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 20:15:04.572578       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [647f17c762ef6d9dff9824a263c655d00b5a40d72ad232143a36bac0c4fa01ae] <==
	I1217 20:15:01.416036       1 serving.go:386] Generated self-signed cert in-memory
	W1217 20:15:03.346976       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1217 20:15:03.347110       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1217 20:15:03.347138       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1217 20:15:03.347176       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1217 20:15:03.378807       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1217 20:15:03.378850       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 20:15:03.380856       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 20:15:03.380882       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 20:15:03.381423       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 20:15:03.381492       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1217 20:15:03.481415       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 17 20:15:03 test-preload-900733 kubelet[1181]: I1217 20:15:03.491749    1181 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-test-preload-900733"
	Dec 17 20:15:03 test-preload-900733 kubelet[1181]: E1217 20:15:03.499559    1181 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-test-preload-900733\" already exists" pod="kube-system/kube-controller-manager-test-preload-900733"
	Dec 17 20:15:03 test-preload-900733 kubelet[1181]: I1217 20:15:03.631408    1181 apiserver.go:52] "Watching apiserver"
	Dec 17 20:15:03 test-preload-900733 kubelet[1181]: E1217 20:15:03.636215    1181 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-66bc5c9577-dwfn6" podUID="e5364f0e-fbe2-4459-8fe7-2bb3f0eef3da"
	Dec 17 20:15:03 test-preload-900733 kubelet[1181]: I1217 20:15:03.653554    1181 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 17 20:15:03 test-preload-900733 kubelet[1181]: E1217 20:15:03.706663    1181 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Dec 17 20:15:03 test-preload-900733 kubelet[1181]: I1217 20:15:03.729243    1181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5cdc53e6-3eb1-472b-a8c7-ae6ddd51a865-lib-modules\") pod \"kube-proxy-jdcq4\" (UID: \"5cdc53e6-3eb1-472b-a8c7-ae6ddd51a865\") " pod="kube-system/kube-proxy-jdcq4"
	Dec 17 20:15:03 test-preload-900733 kubelet[1181]: I1217 20:15:03.729308    1181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c8a6b0f7-1db5-46fb-88d7-ca961858efff-tmp\") pod \"storage-provisioner\" (UID: \"c8a6b0f7-1db5-46fb-88d7-ca961858efff\") " pod="kube-system/storage-provisioner"
	Dec 17 20:15:03 test-preload-900733 kubelet[1181]: I1217 20:15:03.729329    1181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5cdc53e6-3eb1-472b-a8c7-ae6ddd51a865-xtables-lock\") pod \"kube-proxy-jdcq4\" (UID: \"5cdc53e6-3eb1-472b-a8c7-ae6ddd51a865\") " pod="kube-system/kube-proxy-jdcq4"
	Dec 17 20:15:03 test-preload-900733 kubelet[1181]: E1217 20:15:03.729728    1181 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 17 20:15:03 test-preload-900733 kubelet[1181]: E1217 20:15:03.729820    1181 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e5364f0e-fbe2-4459-8fe7-2bb3f0eef3da-config-volume podName:e5364f0e-fbe2-4459-8fe7-2bb3f0eef3da nodeName:}" failed. No retries permitted until 2025-12-17 20:15:04.229801196 +0000 UTC m=+5.705178302 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e5364f0e-fbe2-4459-8fe7-2bb3f0eef3da-config-volume") pod "coredns-66bc5c9577-dwfn6" (UID: "e5364f0e-fbe2-4459-8fe7-2bb3f0eef3da") : object "kube-system"/"coredns" not registered
	Dec 17 20:15:03 test-preload-900733 kubelet[1181]: I1217 20:15:03.787561    1181 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/etcd-test-preload-900733"
	Dec 17 20:15:03 test-preload-900733 kubelet[1181]: I1217 20:15:03.788520    1181 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-test-preload-900733"
	Dec 17 20:15:03 test-preload-900733 kubelet[1181]: E1217 20:15:03.803193    1181 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-test-preload-900733\" already exists" pod="kube-system/etcd-test-preload-900733"
	Dec 17 20:15:03 test-preload-900733 kubelet[1181]: E1217 20:15:03.806107    1181 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-900733\" already exists" pod="kube-system/kube-scheduler-test-preload-900733"
	Dec 17 20:15:04 test-preload-900733 kubelet[1181]: E1217 20:15:04.232238    1181 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 17 20:15:04 test-preload-900733 kubelet[1181]: E1217 20:15:04.232360    1181 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e5364f0e-fbe2-4459-8fe7-2bb3f0eef3da-config-volume podName:e5364f0e-fbe2-4459-8fe7-2bb3f0eef3da nodeName:}" failed. No retries permitted until 2025-12-17 20:15:05.232345645 +0000 UTC m=+6.707722739 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e5364f0e-fbe2-4459-8fe7-2bb3f0eef3da-config-volume") pod "coredns-66bc5c9577-dwfn6" (UID: "e5364f0e-fbe2-4459-8fe7-2bb3f0eef3da") : object "kube-system"/"coredns" not registered
	Dec 17 20:15:05 test-preload-900733 kubelet[1181]: E1217 20:15:05.242629    1181 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 17 20:15:05 test-preload-900733 kubelet[1181]: E1217 20:15:05.242713    1181 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e5364f0e-fbe2-4459-8fe7-2bb3f0eef3da-config-volume podName:e5364f0e-fbe2-4459-8fe7-2bb3f0eef3da nodeName:}" failed. No retries permitted until 2025-12-17 20:15:07.242697339 +0000 UTC m=+8.718074434 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e5364f0e-fbe2-4459-8fe7-2bb3f0eef3da-config-volume") pod "coredns-66bc5c9577-dwfn6" (UID: "e5364f0e-fbe2-4459-8fe7-2bb3f0eef3da") : object "kube-system"/"coredns" not registered
	Dec 17 20:15:05 test-preload-900733 kubelet[1181]: E1217 20:15:05.713260    1181 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-66bc5c9577-dwfn6" podUID="e5364f0e-fbe2-4459-8fe7-2bb3f0eef3da"
	Dec 17 20:15:07 test-preload-900733 kubelet[1181]: E1217 20:15:07.258069    1181 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 17 20:15:07 test-preload-900733 kubelet[1181]: E1217 20:15:07.258208    1181 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e5364f0e-fbe2-4459-8fe7-2bb3f0eef3da-config-volume podName:e5364f0e-fbe2-4459-8fe7-2bb3f0eef3da nodeName:}" failed. No retries permitted until 2025-12-17 20:15:11.25818534 +0000 UTC m=+12.733562450 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e5364f0e-fbe2-4459-8fe7-2bb3f0eef3da-config-volume") pod "coredns-66bc5c9577-dwfn6" (UID: "e5364f0e-fbe2-4459-8fe7-2bb3f0eef3da") : object "kube-system"/"coredns" not registered
	Dec 17 20:15:07 test-preload-900733 kubelet[1181]: E1217 20:15:07.712883    1181 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-66bc5c9577-dwfn6" podUID="e5364f0e-fbe2-4459-8fe7-2bb3f0eef3da"
	Dec 17 20:15:08 test-preload-900733 kubelet[1181]: E1217 20:15:08.712040    1181 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766002508710026437  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:135813}  inodes_used:{value:64}}"
	Dec 17 20:15:08 test-preload-900733 kubelet[1181]: E1217 20:15:08.712085    1181 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766002508710026437  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:135813}  inodes_used:{value:64}}"
	
	
	==> storage-provisioner [534bcc85d5187d0325533972568217d98e38929894e24d13e9d4fc1c8559864e] <==
	I1217 20:15:04.176480       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-900733 -n test-preload-900733
helpers_test.go:270: (dbg) Run:  kubectl --context test-preload-900733 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:176: Cleaning up "test-preload-900733" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-900733
--- FAIL: TestPreload (138.92s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (60.08s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-722044 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1217 20:23:16.195224    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-722044 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (55.773471716s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-722044] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22186
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22186-3611/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-3611/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-722044" primary control-plane node in "pause-722044" cluster
	* Preparing Kubernetes v1.34.3 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-722044" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 20:23:11.585591   41240 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:23:11.585711   41240 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:23:11.585719   41240 out.go:374] Setting ErrFile to fd 2...
	I1217 20:23:11.585723   41240 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:23:11.585888   41240 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-3611/.minikube/bin
	I1217 20:23:11.586304   41240 out.go:368] Setting JSON to false
	I1217 20:23:11.587188   41240 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3931,"bootTime":1765999061,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 20:23:11.587245   41240 start.go:143] virtualization: kvm guest
	I1217 20:23:11.589214   41240 out.go:179] * [pause-722044] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 20:23:11.590320   41240 out.go:179]   - MINIKUBE_LOCATION=22186
	I1217 20:23:11.590320   41240 notify.go:221] Checking for updates...
	I1217 20:23:11.592472   41240 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 20:23:11.593536   41240 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-3611/kubeconfig
	I1217 20:23:11.594481   41240 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-3611/.minikube
	I1217 20:23:11.598746   41240 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 20:23:11.599860   41240 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 20:23:11.601252   41240 config.go:182] Loaded profile config "pause-722044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:23:11.601802   41240 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 20:23:11.634558   41240 out.go:179] * Using the kvm2 driver based on existing profile
	I1217 20:23:11.635657   41240 start.go:309] selected driver: kvm2
	I1217 20:23:11.635676   41240 start.go:927] validating driver "kvm2" against &{Name:pause-722044 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.34.3 ClusterName:pause-722044 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.108 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-insta
ller:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:23:11.635861   41240 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 20:23:11.637372   41240 cni.go:84] Creating CNI manager for ""
	I1217 20:23:11.637466   41240 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 20:23:11.637557   41240 start.go:353] cluster config:
	{Name:pause-722044 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-722044 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.108 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false
portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:23:11.637742   41240 iso.go:125] acquiring lock: {Name:mkf0d7f706dad630931de886de0fce55b517853c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 20:23:11.639266   41240 out.go:179] * Starting "pause-722044" primary control-plane node in "pause-722044" cluster
	I1217 20:23:11.640244   41240 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:23:11.640269   41240 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-3611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1217 20:23:11.640287   41240 cache.go:65] Caching tarball of preloaded images
	I1217 20:23:11.640369   41240 preload.go:238] Found /home/jenkins/minikube-integration/22186-3611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 20:23:11.640382   41240 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 20:23:11.640480   41240 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/pause-722044/config.json ...
	I1217 20:23:11.640688   41240 start.go:360] acquireMachinesLock for pause-722044: {Name:mk03890d04d41d66ccbc23571d0f065ba20ffda0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1217 20:23:11.640728   41240 start.go:364] duration metric: took 23.01µs to acquireMachinesLock for "pause-722044"
	I1217 20:23:11.640745   41240 start.go:96] Skipping create...Using existing machine configuration
	I1217 20:23:11.640754   41240 fix.go:54] fixHost starting: 
	I1217 20:23:11.642285   41240 fix.go:112] recreateIfNeeded on pause-722044: state=Running err=<nil>
	W1217 20:23:11.642303   41240 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 20:23:11.643497   41240 out.go:252] * Updating the running kvm2 "pause-722044" VM ...
	I1217 20:23:11.643535   41240 machine.go:94] provisionDockerMachine start ...
	I1217 20:23:11.645878   41240 main.go:143] libmachine: domain pause-722044 has defined MAC address 52:54:00:19:c1:a0 in network mk-pause-722044
	I1217 20:23:11.646332   41240 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:19:c1:a0", ip: ""} in network mk-pause-722044: {Iface:virbr3 ExpiryTime:2025-12-17 21:22:28 +0000 UTC Type:0 Mac:52:54:00:19:c1:a0 Iaid: IPaddr:192.168.61.108 Prefix:24 Hostname:pause-722044 Clientid:01:52:54:00:19:c1:a0}
	I1217 20:23:11.646369   41240 main.go:143] libmachine: domain pause-722044 has defined IP address 192.168.61.108 and MAC address 52:54:00:19:c1:a0 in network mk-pause-722044
	I1217 20:23:11.646547   41240 main.go:143] libmachine: Using SSH client type: native
	I1217 20:23:11.646771   41240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.61.108 22 <nil> <nil>}
	I1217 20:23:11.646783   41240 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 20:23:11.768865   41240 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-722044
	
	I1217 20:23:11.768897   41240 buildroot.go:166] provisioning hostname "pause-722044"
	I1217 20:23:11.772235   41240 main.go:143] libmachine: domain pause-722044 has defined MAC address 52:54:00:19:c1:a0 in network mk-pause-722044
	I1217 20:23:11.772683   41240 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:19:c1:a0", ip: ""} in network mk-pause-722044: {Iface:virbr3 ExpiryTime:2025-12-17 21:22:28 +0000 UTC Type:0 Mac:52:54:00:19:c1:a0 Iaid: IPaddr:192.168.61.108 Prefix:24 Hostname:pause-722044 Clientid:01:52:54:00:19:c1:a0}
	I1217 20:23:11.772714   41240 main.go:143] libmachine: domain pause-722044 has defined IP address 192.168.61.108 and MAC address 52:54:00:19:c1:a0 in network mk-pause-722044
	I1217 20:23:11.772960   41240 main.go:143] libmachine: Using SSH client type: native
	I1217 20:23:11.773241   41240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.61.108 22 <nil> <nil>}
	I1217 20:23:11.773263   41240 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-722044 && echo "pause-722044" | sudo tee /etc/hostname
	I1217 20:23:11.914599   41240 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-722044
	
	I1217 20:23:11.917618   41240 main.go:143] libmachine: domain pause-722044 has defined MAC address 52:54:00:19:c1:a0 in network mk-pause-722044
	I1217 20:23:11.918101   41240 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:19:c1:a0", ip: ""} in network mk-pause-722044: {Iface:virbr3 ExpiryTime:2025-12-17 21:22:28 +0000 UTC Type:0 Mac:52:54:00:19:c1:a0 Iaid: IPaddr:192.168.61.108 Prefix:24 Hostname:pause-722044 Clientid:01:52:54:00:19:c1:a0}
	I1217 20:23:11.918144   41240 main.go:143] libmachine: domain pause-722044 has defined IP address 192.168.61.108 and MAC address 52:54:00:19:c1:a0 in network mk-pause-722044
	I1217 20:23:11.918376   41240 main.go:143] libmachine: Using SSH client type: native
	I1217 20:23:11.918646   41240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.61.108 22 <nil> <nil>}
	I1217 20:23:11.918664   41240 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-722044' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-722044/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-722044' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 20:23:12.036011   41240 main.go:143] libmachine: SSH cmd err, output: <nil>: 
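The shell snippet above is minikube's idempotent /etc/hosts fix-up: it only acts when no existing line already ends in the new hostname, and then either rewrites the 127.0.1.1 entry or appends one; the empty SSH output here means nothing needed to change. A quick way to confirm the result on the guest (a sketch, reusing the profile name and binary path from this run):

	out/minikube-linux-amd64 -p pause-722044 ssh "hostname; grep 127.0.1.1 /etc/hosts"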
	I1217 20:23:12.036047   41240 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22186-3611/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-3611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-3611/.minikube}
	I1217 20:23:12.036083   41240 buildroot.go:174] setting up certificates
	I1217 20:23:12.036093   41240 provision.go:84] configureAuth start
	I1217 20:23:12.039151   41240 main.go:143] libmachine: domain pause-722044 has defined MAC address 52:54:00:19:c1:a0 in network mk-pause-722044
	I1217 20:23:12.039555   41240 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:19:c1:a0", ip: ""} in network mk-pause-722044: {Iface:virbr3 ExpiryTime:2025-12-17 21:22:28 +0000 UTC Type:0 Mac:52:54:00:19:c1:a0 Iaid: IPaddr:192.168.61.108 Prefix:24 Hostname:pause-722044 Clientid:01:52:54:00:19:c1:a0}
	I1217 20:23:12.039575   41240 main.go:143] libmachine: domain pause-722044 has defined IP address 192.168.61.108 and MAC address 52:54:00:19:c1:a0 in network mk-pause-722044
	I1217 20:23:12.041773   41240 main.go:143] libmachine: domain pause-722044 has defined MAC address 52:54:00:19:c1:a0 in network mk-pause-722044
	I1217 20:23:12.042110   41240 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:19:c1:a0", ip: ""} in network mk-pause-722044: {Iface:virbr3 ExpiryTime:2025-12-17 21:22:28 +0000 UTC Type:0 Mac:52:54:00:19:c1:a0 Iaid: IPaddr:192.168.61.108 Prefix:24 Hostname:pause-722044 Clientid:01:52:54:00:19:c1:a0}
	I1217 20:23:12.042130   41240 main.go:143] libmachine: domain pause-722044 has defined IP address 192.168.61.108 and MAC address 52:54:00:19:c1:a0 in network mk-pause-722044
	I1217 20:23:12.042290   41240 provision.go:143] copyHostCerts
	I1217 20:23:12.042348   41240 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-3611/.minikube/ca.pem, removing ...
	I1217 20:23:12.042367   41240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-3611/.minikube/ca.pem
	I1217 20:23:12.042432   41240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-3611/.minikube/ca.pem (1082 bytes)
	I1217 20:23:12.042578   41240 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-3611/.minikube/cert.pem, removing ...
	I1217 20:23:12.042591   41240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-3611/.minikube/cert.pem
	I1217 20:23:12.042636   41240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-3611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-3611/.minikube/cert.pem (1123 bytes)
	I1217 20:23:12.042735   41240 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-3611/.minikube/key.pem, removing ...
	I1217 20:23:12.042755   41240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-3611/.minikube/key.pem
	I1217 20:23:12.042789   41240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-3611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-3611/.minikube/key.pem (1679 bytes)
	I1217 20:23:12.042874   41240 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-3611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca-key.pem org=jenkins.pause-722044 san=[127.0.0.1 192.168.61.108 localhost minikube pause-722044]
	I1217 20:23:12.176040   41240 provision.go:177] copyRemoteCerts
	I1217 20:23:12.176096   41240 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 20:23:12.179037   41240 main.go:143] libmachine: domain pause-722044 has defined MAC address 52:54:00:19:c1:a0 in network mk-pause-722044
	I1217 20:23:12.179522   41240 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:19:c1:a0", ip: ""} in network mk-pause-722044: {Iface:virbr3 ExpiryTime:2025-12-17 21:22:28 +0000 UTC Type:0 Mac:52:54:00:19:c1:a0 Iaid: IPaddr:192.168.61.108 Prefix:24 Hostname:pause-722044 Clientid:01:52:54:00:19:c1:a0}
	I1217 20:23:12.179567   41240 main.go:143] libmachine: domain pause-722044 has defined IP address 192.168.61.108 and MAC address 52:54:00:19:c1:a0 in network mk-pause-722044
	I1217 20:23:12.179716   41240 sshutil.go:53] new ssh client: &{IP:192.168.61.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/pause-722044/id_rsa Username:docker}
	I1217 20:23:12.269723   41240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 20:23:12.306364   41240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1217 20:23:12.342537   41240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 20:23:12.377410   41240 provision.go:87] duration metric: took 341.305125ms to configureAuth
	I1217 20:23:12.377444   41240 buildroot.go:189] setting minikube options for container-runtime
	I1217 20:23:12.377748   41240 config.go:182] Loaded profile config "pause-722044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:23:12.380823   41240 main.go:143] libmachine: domain pause-722044 has defined MAC address 52:54:00:19:c1:a0 in network mk-pause-722044
	I1217 20:23:12.381335   41240 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:19:c1:a0", ip: ""} in network mk-pause-722044: {Iface:virbr3 ExpiryTime:2025-12-17 21:22:28 +0000 UTC Type:0 Mac:52:54:00:19:c1:a0 Iaid: IPaddr:192.168.61.108 Prefix:24 Hostname:pause-722044 Clientid:01:52:54:00:19:c1:a0}
	I1217 20:23:12.381365   41240 main.go:143] libmachine: domain pause-722044 has defined IP address 192.168.61.108 and MAC address 52:54:00:19:c1:a0 in network mk-pause-722044
	I1217 20:23:12.381591   41240 main.go:143] libmachine: Using SSH client type: native
	I1217 20:23:12.381862   41240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.61.108 22 <nil> <nil>}
	I1217 20:23:12.381883   41240 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 20:23:17.990554   41240 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 20:23:17.990584   41240 machine.go:97] duration metric: took 6.347040839s to provisionDockerMachine
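Most of the 6.35s recorded for provisionDockerMachine is the "sudo systemctl restart crio" triggered by writing /etc/sysconfig/crio.minikube above (the SSH command was issued at 20:23:12.38 and returned at 20:23:17.99). A sketch for checking that the drop-in is in place, assuming the crio unit on the minikube ISO sources that file via an EnvironmentFile directive:

	out/minikube-linux-amd64 -p pause-722044 ssh "cat /etc/sysconfig/crio.minikube && systemctl cat crio"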
	I1217 20:23:17.990605   41240 start.go:293] postStartSetup for "pause-722044" (driver="kvm2")
	I1217 20:23:17.990618   41240 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 20:23:17.990712   41240 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 20:23:17.994317   41240 main.go:143] libmachine: domain pause-722044 has defined MAC address 52:54:00:19:c1:a0 in network mk-pause-722044
	I1217 20:23:17.994875   41240 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:19:c1:a0", ip: ""} in network mk-pause-722044: {Iface:virbr3 ExpiryTime:2025-12-17 21:22:28 +0000 UTC Type:0 Mac:52:54:00:19:c1:a0 Iaid: IPaddr:192.168.61.108 Prefix:24 Hostname:pause-722044 Clientid:01:52:54:00:19:c1:a0}
	I1217 20:23:17.994909   41240 main.go:143] libmachine: domain pause-722044 has defined IP address 192.168.61.108 and MAC address 52:54:00:19:c1:a0 in network mk-pause-722044
	I1217 20:23:17.995090   41240 sshutil.go:53] new ssh client: &{IP:192.168.61.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/pause-722044/id_rsa Username:docker}
	I1217 20:23:18.085889   41240 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 20:23:18.092666   41240 info.go:137] Remote host: Buildroot 2025.02
	I1217 20:23:18.092706   41240 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-3611/.minikube/addons for local assets ...
	I1217 20:23:18.092801   41240 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-3611/.minikube/files for local assets ...
	I1217 20:23:18.092922   41240 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-3611/.minikube/files/etc/ssl/certs/75312.pem -> 75312.pem in /etc/ssl/certs
	I1217 20:23:18.093076   41240 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 20:23:18.105117   41240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/files/etc/ssl/certs/75312.pem --> /etc/ssl/certs/75312.pem (1708 bytes)
	I1217 20:23:18.140843   41240 start.go:296] duration metric: took 150.223426ms for postStartSetup
	I1217 20:23:18.140887   41240 fix.go:56] duration metric: took 6.500133591s for fixHost
	I1217 20:23:18.144003   41240 main.go:143] libmachine: domain pause-722044 has defined MAC address 52:54:00:19:c1:a0 in network mk-pause-722044
	I1217 20:23:18.144560   41240 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:19:c1:a0", ip: ""} in network mk-pause-722044: {Iface:virbr3 ExpiryTime:2025-12-17 21:22:28 +0000 UTC Type:0 Mac:52:54:00:19:c1:a0 Iaid: IPaddr:192.168.61.108 Prefix:24 Hostname:pause-722044 Clientid:01:52:54:00:19:c1:a0}
	I1217 20:23:18.144585   41240 main.go:143] libmachine: domain pause-722044 has defined IP address 192.168.61.108 and MAC address 52:54:00:19:c1:a0 in network mk-pause-722044
	I1217 20:23:18.144783   41240 main.go:143] libmachine: Using SSH client type: native
	I1217 20:23:18.145043   41240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.61.108 22 <nil> <nil>}
	I1217 20:23:18.145060   41240 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1217 20:23:18.268873   41240 main.go:143] libmachine: SSH cmd err, output: <nil>: 1766002998.261953352
	
	I1217 20:23:18.268899   41240 fix.go:216] guest clock: 1766002998.261953352
	I1217 20:23:18.268908   41240 fix.go:229] Guest: 2025-12-17 20:23:18.261953352 +0000 UTC Remote: 2025-12-17 20:23:18.140891182 +0000 UTC m=+6.611521495 (delta=121.06217ms)
	I1217 20:23:18.268928   41240 fix.go:200] guest clock delta is within tolerance: 121.06217ms
	I1217 20:23:18.268935   41240 start.go:83] releasing machines lock for "pause-722044", held for 6.628196762s
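The fix.go lines above read the guest clock over SSH with "date +%s.%N", compare it against the host-side timestamp, and accept the roughly 121ms delta as within tolerance. The same comparison can be repeated by hand, for example:

	date +%s.%N
	out/minikube-linux-amd64 -p pause-722044 ssh "date +%s.%N"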
	I1217 20:23:18.272017   41240 main.go:143] libmachine: domain pause-722044 has defined MAC address 52:54:00:19:c1:a0 in network mk-pause-722044
	I1217 20:23:18.272611   41240 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:19:c1:a0", ip: ""} in network mk-pause-722044: {Iface:virbr3 ExpiryTime:2025-12-17 21:22:28 +0000 UTC Type:0 Mac:52:54:00:19:c1:a0 Iaid: IPaddr:192.168.61.108 Prefix:24 Hostname:pause-722044 Clientid:01:52:54:00:19:c1:a0}
	I1217 20:23:18.272635   41240 main.go:143] libmachine: domain pause-722044 has defined IP address 192.168.61.108 and MAC address 52:54:00:19:c1:a0 in network mk-pause-722044
	I1217 20:23:18.273219   41240 ssh_runner.go:195] Run: cat /version.json
	I1217 20:23:18.273280   41240 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 20:23:18.276915   41240 main.go:143] libmachine: domain pause-722044 has defined MAC address 52:54:00:19:c1:a0 in network mk-pause-722044
	I1217 20:23:18.277354   41240 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:19:c1:a0", ip: ""} in network mk-pause-722044: {Iface:virbr3 ExpiryTime:2025-12-17 21:22:28 +0000 UTC Type:0 Mac:52:54:00:19:c1:a0 Iaid: IPaddr:192.168.61.108 Prefix:24 Hostname:pause-722044 Clientid:01:52:54:00:19:c1:a0}
	I1217 20:23:18.277377   41240 main.go:143] libmachine: domain pause-722044 has defined IP address 192.168.61.108 and MAC address 52:54:00:19:c1:a0 in network mk-pause-722044
	I1217 20:23:18.277383   41240 main.go:143] libmachine: domain pause-722044 has defined MAC address 52:54:00:19:c1:a0 in network mk-pause-722044
	I1217 20:23:18.277637   41240 sshutil.go:53] new ssh client: &{IP:192.168.61.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/pause-722044/id_rsa Username:docker}
	I1217 20:23:18.277992   41240 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:19:c1:a0", ip: ""} in network mk-pause-722044: {Iface:virbr3 ExpiryTime:2025-12-17 21:22:28 +0000 UTC Type:0 Mac:52:54:00:19:c1:a0 Iaid: IPaddr:192.168.61.108 Prefix:24 Hostname:pause-722044 Clientid:01:52:54:00:19:c1:a0}
	I1217 20:23:18.278036   41240 main.go:143] libmachine: domain pause-722044 has defined IP address 192.168.61.108 and MAC address 52:54:00:19:c1:a0 in network mk-pause-722044
	I1217 20:23:18.278274   41240 sshutil.go:53] new ssh client: &{IP:192.168.61.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/pause-722044/id_rsa Username:docker}
	I1217 20:23:18.361274   41240 ssh_runner.go:195] Run: systemctl --version
	I1217 20:23:18.390988   41240 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 20:23:18.540556   41240 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 20:23:18.550010   41240 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 20:23:18.550109   41240 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 20:23:18.566583   41240 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
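The find/mv step above renames any pre-existing bridge or podman CNI configs to *.mk_disabled so they cannot conflict with the CNI that minikube is about to configure; in this run nothing matched, so nothing was disabled. To see what is actually present in the guest, one option is:

	out/minikube-linux-amd64 -p pause-722044 ssh "ls -la /etc/cni/net.d"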
	I1217 20:23:18.566611   41240 start.go:496] detecting cgroup driver to use...
	I1217 20:23:18.566706   41240 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 20:23:18.589306   41240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 20:23:18.607927   41240 docker.go:218] disabling cri-docker service (if available) ...
	I1217 20:23:18.607994   41240 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 20:23:18.627927   41240 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 20:23:18.644965   41240 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 20:23:18.836744   41240 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 20:23:19.005273   41240 docker.go:234] disabling docker service ...
	I1217 20:23:19.005356   41240 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 20:23:19.037829   41240 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 20:23:19.054821   41240 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 20:23:19.272332   41240 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 20:23:19.445796   41240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 20:23:19.462258   41240 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 20:23:19.486360   41240 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 20:23:19.486419   41240 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:23:19.498640   41240 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 20:23:19.498696   41240 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:23:19.511045   41240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:23:19.523595   41240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:23:19.535874   41240 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 20:23:19.549714   41240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:23:19.562636   41240 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:23:19.575488   41240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:23:19.587210   41240 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 20:23:19.597385   41240 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 20:23:19.609218   41240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:23:19.789026   41240 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 20:23:22.761687   41240 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.972623852s)
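The sequence of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image to registry.k8s.io/pause:3.10.1, sets cgroup_manager to "cgroupfs" with conmon_cgroup = "pod", and injects net.ipv4.ip_unprivileged_port_start=0 into default_sysctls, before the 2.97s CRI-O restart logged here. A sketch for spot-checking the resulting file:

	out/minikube-linux-amd64 -p pause-722044 ssh \
	  "grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"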
	I1217 20:23:22.761721   41240 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 20:23:22.761772   41240 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 20:23:22.768632   41240 start.go:564] Will wait 60s for crictl version
	I1217 20:23:22.768691   41240 ssh_runner.go:195] Run: which crictl
	I1217 20:23:22.772885   41240 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1217 20:23:22.810573   41240 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1217 20:23:22.810639   41240 ssh_runner.go:195] Run: crio --version
	I1217 20:23:22.879002   41240 ssh_runner.go:195] Run: crio --version
	I1217 20:23:22.953896   41240 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.29.1 ...
	I1217 20:23:22.958446   41240 main.go:143] libmachine: domain pause-722044 has defined MAC address 52:54:00:19:c1:a0 in network mk-pause-722044
	I1217 20:23:22.959006   41240 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:19:c1:a0", ip: ""} in network mk-pause-722044: {Iface:virbr3 ExpiryTime:2025-12-17 21:22:28 +0000 UTC Type:0 Mac:52:54:00:19:c1:a0 Iaid: IPaddr:192.168.61.108 Prefix:24 Hostname:pause-722044 Clientid:01:52:54:00:19:c1:a0}
	I1217 20:23:22.959032   41240 main.go:143] libmachine: domain pause-722044 has defined IP address 192.168.61.108 and MAC address 52:54:00:19:c1:a0 in network mk-pause-722044
	I1217 20:23:22.959281   41240 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1217 20:23:22.969489   41240 kubeadm.go:884] updating cluster {Name:pause-722044 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3
ClusterName:pause-722044 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.108 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvid
ia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 20:23:22.969669   41240 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:23:22.969735   41240 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:23:23.163423   41240 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:23:23.163451   41240 crio.go:433] Images already preloaded, skipping extraction
	I1217 20:23:23.163537   41240 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:23:23.265360   41240 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:23:23.265389   41240 cache_images.go:86] Images are preloaded, skipping loading
	I1217 20:23:23.265397   41240 kubeadm.go:935] updating node { 192.168.61.108 8443 v1.34.3 crio true true} ...
	I1217 20:23:23.265559   41240 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-722044 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.108
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:pause-722044 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
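The kubelet unit fragment above is rendered in memory and, a few lines below, copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and /lib/systemd/system/kubelet.service on the node. Once the machine is provisioned, the effective unit (base file plus drop-in) can be inspected with:

	out/minikube-linux-amd64 -p pause-722044 ssh "systemctl cat kubelet"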
	I1217 20:23:23.265661   41240 ssh_runner.go:195] Run: crio config
	I1217 20:23:23.365692   41240 cni.go:84] Creating CNI manager for ""
	I1217 20:23:23.365725   41240 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 20:23:23.365745   41240 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 20:23:23.365781   41240 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.108 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-722044 NodeName:pause-722044 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.108"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.108 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 20:23:23.365983   41240 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.108
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-722044"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.108"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.108"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 20:23:23.366077   41240 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 20:23:23.386378   41240 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 20:23:23.386447   41240 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 20:23:23.416605   41240 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1217 20:23:23.463791   41240 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 20:23:23.506698   41240 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
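The generated kubeadm configuration (the InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration documents above) is written to /var/tmp/minikube/kubeadm.yaml.new on the node, as the 2215-byte scp shows. A hedged sketch for reviewing it in place; the kubeadm binary is assumed to sit alongside the other k8s binaries found under /var/lib/minikube/binaries/v1.34.3, and "kubeadm config validate" is assumed to be available in this kubeadm version:

	out/minikube-linux-amd64 -p pause-722044 ssh "sudo cat /var/tmp/minikube/kubeadm.yaml.new"
	out/minikube-linux-amd64 -p pause-722044 ssh \
	  "sudo /var/lib/minikube/binaries/v1.34.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new"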
	I1217 20:23:23.537872   41240 ssh_runner.go:195] Run: grep 192.168.61.108	control-plane.minikube.internal$ /etc/hosts
	I1217 20:23:23.544785   41240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:23:23.904669   41240 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:23:23.974263   41240 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/pause-722044 for IP: 192.168.61.108
	I1217 20:23:23.974305   41240 certs.go:195] generating shared ca certs ...
	I1217 20:23:23.974328   41240 certs.go:227] acquiring lock for ca certs: {Name:mka9d751f3e3cbcb654d1f1d24f2b10b27bc58a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:23:23.974588   41240 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-3611/.minikube/ca.key
	I1217 20:23:23.974685   41240 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-3611/.minikube/proxy-client-ca.key
	I1217 20:23:23.974714   41240 certs.go:257] generating profile certs ...
	I1217 20:23:23.974855   41240 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/pause-722044/client.key
	I1217 20:23:23.974948   41240 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/pause-722044/apiserver.key.ad183998
	I1217 20:23:23.975016   41240 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/pause-722044/proxy-client.key
	I1217 20:23:23.975189   41240 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-3611/.minikube/certs/7531.pem (1338 bytes)
	W1217 20:23:23.975270   41240 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-3611/.minikube/certs/7531_empty.pem, impossibly tiny 0 bytes
	I1217 20:23:23.975288   41240 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:23:23.975326   41240 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:23:23.975365   41240 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-3611/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:23:23.975399   41240 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-3611/.minikube/certs/key.pem (1679 bytes)
	I1217 20:23:23.975473   41240 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-3611/.minikube/files/etc/ssl/certs/75312.pem (1708 bytes)
	I1217 20:23:23.976356   41240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 20:23:24.035730   41240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 20:23:24.135679   41240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 20:23:24.213675   41240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 20:23:24.268567   41240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/pause-722044/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1217 20:23:24.336559   41240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/pause-722044/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 20:23:24.387827   41240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/pause-722044/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 20:23:24.446219   41240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/pause-722044/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 20:23:24.557109   41240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:23:24.607771   41240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/certs/7531.pem --> /usr/share/ca-certificates/7531.pem (1338 bytes)
	I1217 20:23:24.657766   41240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/files/etc/ssl/certs/75312.pem --> /usr/share/ca-certificates/75312.pem (1708 bytes)
	I1217 20:23:24.716291   41240 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 20:23:24.763312   41240 ssh_runner.go:195] Run: openssl version
	I1217 20:23:24.784722   41240 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:23:24.830403   41240 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:23:24.870647   41240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:23:24.889495   41240 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 19:21 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:23:24.889572   41240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:23:24.902499   41240 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:23:24.921766   41240 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7531.pem
	I1217 20:23:24.948013   41240 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7531.pem /etc/ssl/certs/7531.pem
	I1217 20:23:24.969491   41240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7531.pem
	I1217 20:23:24.978666   41240 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 19:32 /usr/share/ca-certificates/7531.pem
	I1217 20:23:24.978721   41240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7531.pem
	I1217 20:23:24.990165   41240 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:23:25.022908   41240 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/75312.pem
	I1217 20:23:25.045641   41240 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/75312.pem /etc/ssl/certs/75312.pem
	I1217 20:23:25.081758   41240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75312.pem
	I1217 20:23:25.108656   41240 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 19:32 /usr/share/ca-certificates/75312.pem
	I1217 20:23:25.108724   41240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75312.pem
	I1217 20:23:25.128390   41240 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:23:25.154715   41240 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 20:23:25.167990   41240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 20:23:25.180612   41240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 20:23:25.189791   41240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 20:23:25.200401   41240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 20:23:25.211743   41240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 20:23:25.223335   41240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
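Each "openssl x509 -noout -checkend 86400" call above exits 0 only if the certificate is still valid 86400 seconds (24 hours) from now; these checks feed the decision about whether the existing control-plane certificates can be reused on this second start. The same check can be run manually, for example:

	out/minikube-linux-amd64 -p pause-722044 ssh \
	  "openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 && echo still-valid"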
	I1217 20:23:25.235010   41240 kubeadm.go:401] StartCluster: {Name:pause-722044 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 Cl
usterName:pause-722044 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.108 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-
gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:23:25.235143   41240 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 20:23:25.235216   41240 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:23:25.303568   41240 cri.go:89] found id: "3148e9a334330e99df200e53da3dd65a4273e1bd7bfe071c3d5a2ab5babc79cd"
	I1217 20:23:25.303591   41240 cri.go:89] found id: "00a2e25f105f3f6ead22ad8dc039761969eb90ee1fbadaddcc49dfff31ea179b"
	I1217 20:23:25.303597   41240 cri.go:89] found id: "288a092a801206a702bf6578ae6b07e6044aead431acb0edbc60cf72d5ca3b17"
	I1217 20:23:25.303602   41240 cri.go:89] found id: "1dbeb3a9804d156cf89fd691a0bb1fbcc3a6a27f4878706712834561cd962506"
	I1217 20:23:25.303606   41240 cri.go:89] found id: "96157b431de6bce3a1d7a6419fab455461481cafe1319056a3a0efa2e1c076cb"
	I1217 20:23:25.303610   41240 cri.go:89] found id: "eb92e68f0672b340c271cd39f375352462fbcd4af3068254b0220df4d2490ea9"
	I1217 20:23:25.303614   41240 cri.go:89] found id: "d3aa5bb79009bf7c5283af9edad023fa9e3089576286be67bfc0903b30de2deb"
	I1217 20:23:25.303619   41240 cri.go:89] found id: "458be37c9164dc0dfbd59b0b8dcd61a892bf0878a72ca6f6387f5b534e8724ca"
	I1217 20:23:25.303623   41240 cri.go:89] found id: "57b5dad3a6eb199d74ed65b35e1c272c026349deacd961f4b0ab358df4b1767a"
	I1217 20:23:25.303632   41240 cri.go:89] found id: "4d2d7a7c7ff3887b933004ee0d6287b3244e6c54069ad29faa074d1cf1e142fa"
	I1217 20:23:25.303636   41240 cri.go:89] found id: "4ecea3017ab351f428174d13d98abba7177414280659de95b9d0c5042ef461cb"
	I1217 20:23:25.303640   41240 cri.go:89] found id: "e6ead26278179f9e5597d5e890d711d7382a9ccec643ef3635c4f23c71576ee7"
	I1217 20:23:25.303645   41240 cri.go:89] found id: ""
	I1217 20:23:25.303693   41240 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-722044 -n pause-722044
helpers_test.go:253: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-722044 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p pause-722044 logs -n 25: (1.294727795s)
helpers_test.go:261: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                            ARGS                                                                             │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p cert-options-597207                                                                                                                                      │ cert-options-597207       │ jenkins │ v1.37.0 │ 17 Dec 25 20:19 UTC │ 17 Dec 25 20:19 UTC │
	│ ssh     │ -p NoKubernetes-680060 sudo systemctl is-active --quiet service kubelet                                                                                     │ NoKubernetes-680060       │ jenkins │ v1.37.0 │ 17 Dec 25 20:19 UTC │                     │
	│ start   │ -p kubernetes-upgrade-813074 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                      │ kubernetes-upgrade-813074 │ jenkins │ v1.37.0 │ 17 Dec 25 20:19 UTC │ 17 Dec 25 20:20 UTC │
	│ stop    │ -p NoKubernetes-680060                                                                                                                                      │ NoKubernetes-680060       │ jenkins │ v1.37.0 │ 17 Dec 25 20:19 UTC │ 17 Dec 25 20:19 UTC │
	│ start   │ -p NoKubernetes-680060 --driver=kvm2  --container-runtime=crio                                                                                              │ NoKubernetes-680060       │ jenkins │ v1.37.0 │ 17 Dec 25 20:19 UTC │ 17 Dec 25 20:20 UTC │
	│ start   │ -p running-upgrade-824542 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                      │ running-upgrade-824542    │ jenkins │ v1.37.0 │ 17 Dec 25 20:20 UTC │                     │
	│ stop    │ -p kubernetes-upgrade-813074                                                                                                                                │ kubernetes-upgrade-813074 │ jenkins │ v1.37.0 │ 17 Dec 25 20:20 UTC │ 17 Dec 25 20:20 UTC │
	│ ssh     │ -p NoKubernetes-680060 sudo systemctl is-active --quiet service kubelet                                                                                     │ NoKubernetes-680060       │ jenkins │ v1.37.0 │ 17 Dec 25 20:20 UTC │                     │
	│ delete  │ -p NoKubernetes-680060                                                                                                                                      │ NoKubernetes-680060       │ jenkins │ v1.37.0 │ 17 Dec 25 20:20 UTC │ 17 Dec 25 20:20 UTC │
	│ start   │ -p kubernetes-upgrade-813074 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                 │ kubernetes-upgrade-813074 │ jenkins │ v1.37.0 │ 17 Dec 25 20:20 UTC │ 17 Dec 25 20:21 UTC │
	│ start   │ -p stopped-upgrade-897195 --memory=3072 --vm-driver=kvm2  --container-runtime=crio                                                                          │ stopped-upgrade-897195    │ jenkins │ v1.35.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ start   │ -p kubernetes-upgrade-813074 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio                                             │ kubernetes-upgrade-813074 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │                     │
	│ start   │ -p kubernetes-upgrade-813074 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                 │ kubernetes-upgrade-813074 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:22 UTC │
	│ stop    │ stopped-upgrade-897195 stop                                                                                                                                 │ stopped-upgrade-897195    │ jenkins │ v1.35.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ start   │ -p stopped-upgrade-897195 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                      │ stopped-upgrade-897195    │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:22 UTC │
	│ delete  │ -p kubernetes-upgrade-813074                                                                                                                                │ kubernetes-upgrade-813074 │ jenkins │ v1.37.0 │ 17 Dec 25 20:22 UTC │ 17 Dec 25 20:22 UTC │
	│ start   │ -p pause-722044 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio                                                     │ pause-722044              │ jenkins │ v1.37.0 │ 17 Dec 25 20:22 UTC │ 17 Dec 25 20:23 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-897195 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker │ stopped-upgrade-897195    │ jenkins │ v1.37.0 │ 17 Dec 25 20:22 UTC │                     │
	│ delete  │ -p stopped-upgrade-897195                                                                                                                                   │ stopped-upgrade-897195    │ jenkins │ v1.37.0 │ 17 Dec 25 20:22 UTC │ 17 Dec 25 20:22 UTC │
	│ start   │ -p auto-698465 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio                                       │ auto-698465               │ jenkins │ v1.37.0 │ 17 Dec 25 20:22 UTC │ 17 Dec 25 20:23 UTC │
	│ start   │ -p cert-expiration-229742 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio                                                     │ cert-expiration-229742    │ jenkins │ v1.37.0 │ 17 Dec 25 20:22 UTC │ 17 Dec 25 20:23 UTC │
	│ start   │ -p pause-722044 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                              │ pause-722044              │ jenkins │ v1.37.0 │ 17 Dec 25 20:23 UTC │ 17 Dec 25 20:24 UTC │
	│ delete  │ -p cert-expiration-229742                                                                                                                                   │ cert-expiration-229742    │ jenkins │ v1.37.0 │ 17 Dec 25 20:23 UTC │ 17 Dec 25 20:23 UTC │
	│ start   │ -p kindnet-698465 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio                      │ kindnet-698465            │ jenkins │ v1.37.0 │ 17 Dec 25 20:23 UTC │                     │
	│ ssh     │ -p auto-698465 pgrep -a kubelet                                                                                                                             │ auto-698465               │ jenkins │ v1.37.0 │ 17 Dec 25 20:23 UTC │ 17 Dec 25 20:23 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 20:23:37
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 20:23:37.738608   41406 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:23:37.738726   41406 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:23:37.738734   41406 out.go:374] Setting ErrFile to fd 2...
	I1217 20:23:37.738738   41406 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:23:37.738923   41406 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-3611/.minikube/bin
	I1217 20:23:37.739377   41406 out.go:368] Setting JSON to false
	I1217 20:23:37.740230   41406 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3957,"bootTime":1765999061,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 20:23:37.740291   41406 start.go:143] virtualization: kvm guest
	I1217 20:23:37.742214   41406 out.go:179] * [kindnet-698465] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 20:23:37.743374   41406 out.go:179]   - MINIKUBE_LOCATION=22186
	I1217 20:23:37.743371   41406 notify.go:221] Checking for updates...
	I1217 20:23:37.744574   41406 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 20:23:37.745743   41406 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-3611/kubeconfig
	I1217 20:23:37.746882   41406 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-3611/.minikube
	I1217 20:23:37.747977   41406 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 20:23:37.752694   41406 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 20:23:37.754075   41406 config.go:182] Loaded profile config "auto-698465": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:23:37.754180   41406 config.go:182] Loaded profile config "guest-867309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1217 20:23:37.754316   41406 config.go:182] Loaded profile config "pause-722044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:23:37.754423   41406 config.go:182] Loaded profile config "running-upgrade-824542": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1217 20:23:37.754541   41406 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 20:23:37.788856   41406 out.go:179] * Using the kvm2 driver based on user configuration
	I1217 20:23:37.789762   41406 start.go:309] selected driver: kvm2
	I1217 20:23:37.789776   41406 start.go:927] validating driver "kvm2" against <nil>
	I1217 20:23:37.789787   41406 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 20:23:37.790597   41406 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 20:23:37.790828   41406 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 20:23:37.790853   41406 cni.go:84] Creating CNI manager for "kindnet"
	I1217 20:23:37.790858   41406 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 20:23:37.790896   41406 start.go:353] cluster config:
	{Name:kindnet-698465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:kindnet-698465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:23:37.790977   41406 iso.go:125] acquiring lock: {Name:mkf0d7f706dad630931de886de0fce55b517853c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 20:23:37.792213   41406 out.go:179] * Starting "kindnet-698465" primary control-plane node in "kindnet-698465" cluster
	I1217 20:23:37.793120   41406 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:23:37.793150   41406 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-3611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1217 20:23:37.793160   41406 cache.go:65] Caching tarball of preloaded images
	I1217 20:23:37.793243   41406 preload.go:238] Found /home/jenkins/minikube-integration/22186-3611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 20:23:37.793261   41406 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 20:23:37.793336   41406 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/config.json ...
	I1217 20:23:37.793354   41406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/config.json: {Name:mk1a7b2e322d257130e0cb198c67e12a9ac9a0b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:23:37.793479   41406 start.go:360] acquireMachinesLock for kindnet-698465: {Name:mk03890d04d41d66ccbc23571d0f065ba20ffda0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1217 20:23:37.793515   41406 start.go:364] duration metric: took 22.259µs to acquireMachinesLock for "kindnet-698465"
	I1217 20:23:37.793567   41406 start.go:93] Provisioning new machine with config: &{Name:kindnet-698465 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:kindnet-698465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 20:23:37.793631   41406 start.go:125] createHost starting for "" (driver="kvm2")
	W1217 20:23:35.856116   40822 pod_ready.go:104] pod "coredns-66bc5c9577-z6sfq" is not "Ready", error: <nil>
	W1217 20:23:38.357022   40822 pod_ready.go:104] pod "coredns-66bc5c9577-z6sfq" is not "Ready", error: <nil>
	I1217 20:23:35.858233   39298 cri.go:89] found id: "07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8"
	I1217 20:23:35.858254   39298 cri.go:89] found id: ""
	I1217 20:23:35.858264   39298 logs.go:282] 1 containers: [07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8]
	I1217 20:23:35.858327   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:35.863872   39298 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:23:35.863941   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:23:35.912058   39298 cri.go:89] found id: "132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1"
	I1217 20:23:35.912082   39298 cri.go:89] found id: "a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e"
	I1217 20:23:35.912088   39298 cri.go:89] found id: ""
	I1217 20:23:35.912097   39298 logs.go:282] 2 containers: [132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1 a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e]
	I1217 20:23:35.912152   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:35.917700   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:35.923109   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:23:35.923189   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:23:35.968314   39298 cri.go:89] found id: "316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e"
	I1217 20:23:35.968346   39298 cri.go:89] found id: "8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2"
	I1217 20:23:35.968353   39298 cri.go:89] found id: ""
	I1217 20:23:35.968362   39298 logs.go:282] 2 containers: [316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e 8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2]
	I1217 20:23:35.968423   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:35.974010   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:35.979201   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:23:35.979278   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:23:36.024228   39298 cri.go:89] found id: "5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde"
	I1217 20:23:36.024253   39298 cri.go:89] found id: ""
	I1217 20:23:36.024263   39298 logs.go:282] 1 containers: [5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde]
	I1217 20:23:36.024324   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:36.028787   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:23:36.028856   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:23:36.077010   39298 cri.go:89] found id: "fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda"
	I1217 20:23:36.077032   39298 cri.go:89] found id: ""
	I1217 20:23:36.077041   39298 logs.go:282] 1 containers: [fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda]
	I1217 20:23:36.077098   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:36.081463   39298 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:23:36.081539   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:23:36.129947   39298 cri.go:89] found id: ""
	I1217 20:23:36.129981   39298 logs.go:282] 0 containers: []
	W1217 20:23:36.129991   39298 logs.go:284] No container was found matching "kindnet"
	I1217 20:23:36.129999   39298 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 20:23:36.130062   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 20:23:36.165790   39298 cri.go:89] found id: "b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90"
	I1217 20:23:36.165819   39298 cri.go:89] found id: ""
	I1217 20:23:36.165830   39298 logs.go:282] 1 containers: [b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90]
	I1217 20:23:36.165893   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:36.170698   39298 logs.go:123] Gathering logs for etcd [07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8] ...
	I1217 20:23:36.170772   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8"
	I1217 20:23:36.225282   39298 logs.go:123] Gathering logs for coredns [132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1] ...
	I1217 20:23:36.225311   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1"
	I1217 20:23:36.261773   39298 logs.go:123] Gathering logs for coredns [a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e] ...
	I1217 20:23:36.261810   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e"
	I1217 20:23:36.303724   39298 logs.go:123] Gathering logs for kube-scheduler [316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e] ...
	I1217 20:23:36.303752   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e"
	I1217 20:23:36.388426   39298 logs.go:123] Gathering logs for kube-scheduler [8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2] ...
	I1217 20:23:36.388457   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2"
	I1217 20:23:36.437348   39298 logs.go:123] Gathering logs for container status ...
	I1217 20:23:36.437376   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:23:36.503779   39298 logs.go:123] Gathering logs for kubelet ...
	I1217 20:23:36.503806   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:23:36.600383   39298 logs.go:123] Gathering logs for dmesg ...
	I1217 20:23:36.600421   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:23:36.617428   39298 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:23:36.617467   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:23:36.703325   39298 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:23:36.703347   39298 logs.go:123] Gathering logs for kube-proxy [5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde] ...
	I1217 20:23:36.703363   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde"
	I1217 20:23:36.749987   39298 logs.go:123] Gathering logs for kube-controller-manager [fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda] ...
	I1217 20:23:36.750017   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda"
	I1217 20:23:36.793183   39298 logs.go:123] Gathering logs for storage-provisioner [b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90] ...
	I1217 20:23:36.793212   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90"
	I1217 20:23:36.839944   39298 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:23:36.839992   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:23:37.203837   39298 logs.go:123] Gathering logs for kube-apiserver [83636ebff083364d3ed28b74e408a7dd6f4875ce6d80d9cbf6b580ee6c1180ad] ...
	I1217 20:23:37.203883   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83636ebff083364d3ed28b74e408a7dd6f4875ce6d80d9cbf6b580ee6c1180ad"
	I1217 20:23:39.750223   39298 api_server.go:253] Checking apiserver healthz at https://192.168.83.130:8443/healthz ...
	I1217 20:23:39.750920   39298 api_server.go:269] stopped: https://192.168.83.130:8443/healthz: Get "https://192.168.83.130:8443/healthz": dial tcp 192.168.83.130:8443: connect: connection refused
	I1217 20:23:39.750973   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:23:39.751020   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:23:39.800091   39298 cri.go:89] found id: "83636ebff083364d3ed28b74e408a7dd6f4875ce6d80d9cbf6b580ee6c1180ad"
	I1217 20:23:39.800128   39298 cri.go:89] found id: ""
	I1217 20:23:39.800138   39298 logs.go:282] 1 containers: [83636ebff083364d3ed28b74e408a7dd6f4875ce6d80d9cbf6b580ee6c1180ad]
	I1217 20:23:39.800223   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:39.804642   39298 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:23:39.804696   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:23:39.845637   39298 cri.go:89] found id: "07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8"
	I1217 20:23:39.845666   39298 cri.go:89] found id: ""
	I1217 20:23:39.845677   39298 logs.go:282] 1 containers: [07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8]
	I1217 20:23:39.845753   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:39.850412   39298 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:23:39.850491   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:23:39.893682   39298 cri.go:89] found id: "132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1"
	I1217 20:23:39.893710   39298 cri.go:89] found id: "a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e"
	I1217 20:23:39.893718   39298 cri.go:89] found id: ""
	I1217 20:23:39.893729   39298 logs.go:282] 2 containers: [132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1 a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e]
	I1217 20:23:39.893800   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:39.898824   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:39.903398   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:23:39.903459   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:23:39.948298   39298 cri.go:89] found id: "316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e"
	I1217 20:23:39.948323   39298 cri.go:89] found id: "8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2"
	I1217 20:23:39.948328   39298 cri.go:89] found id: ""
	I1217 20:23:39.948338   39298 logs.go:282] 2 containers: [316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e 8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2]
	I1217 20:23:39.948406   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:39.952874   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:39.957248   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:23:39.957323   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:23:39.997279   39298 cri.go:89] found id: "5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde"
	I1217 20:23:39.997309   39298 cri.go:89] found id: ""
	I1217 20:23:39.997324   39298 logs.go:282] 1 containers: [5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde]
	I1217 20:23:39.997401   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:40.002236   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:23:40.002315   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:23:40.046874   39298 cri.go:89] found id: "fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda"
	I1217 20:23:40.046898   39298 cri.go:89] found id: ""
	I1217 20:23:40.046908   39298 logs.go:282] 1 containers: [fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda]
	I1217 20:23:40.046980   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:40.051413   39298 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:23:40.051479   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:23:40.091909   39298 cri.go:89] found id: ""
	I1217 20:23:40.091962   39298 logs.go:282] 0 containers: []
	W1217 20:23:40.091976   39298 logs.go:284] No container was found matching "kindnet"
	I1217 20:23:40.091984   39298 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 20:23:40.092056   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 20:23:40.129051   39298 cri.go:89] found id: "b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90"
	I1217 20:23:40.129071   39298 cri.go:89] found id: ""
	I1217 20:23:40.129081   39298 logs.go:282] 1 containers: [b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90]
	I1217 20:23:40.129148   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:40.133839   39298 logs.go:123] Gathering logs for kubelet ...
	I1217 20:23:40.133864   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:23:40.253793   39298 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:23:40.253838   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:23:40.338176   39298 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:23:40.338207   39298 logs.go:123] Gathering logs for kube-proxy [5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde] ...
	I1217 20:23:40.338227   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde"
	I1217 20:23:40.379739   39298 logs.go:123] Gathering logs for kube-controller-manager [fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda] ...
	I1217 20:23:40.379772   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda"
	I1217 20:23:40.418780   39298 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:23:40.418808   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:23:40.779235   39298 logs.go:123] Gathering logs for container status ...
	I1217 20:23:40.779299   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:23:40.828209   39298 logs.go:123] Gathering logs for dmesg ...
	I1217 20:23:40.828240   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:23:40.845344   39298 logs.go:123] Gathering logs for kube-apiserver [83636ebff083364d3ed28b74e408a7dd6f4875ce6d80d9cbf6b580ee6c1180ad] ...
	I1217 20:23:40.845392   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83636ebff083364d3ed28b74e408a7dd6f4875ce6d80d9cbf6b580ee6c1180ad"
	I1217 20:23:37.794871   41406 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1217 20:23:37.795027   41406 start.go:159] libmachine.API.Create for "kindnet-698465" (driver="kvm2")
	I1217 20:23:37.795055   41406 client.go:173] LocalClient.Create starting
	I1217 20:23:37.795124   41406 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca.pem
	I1217 20:23:37.795152   41406 main.go:143] libmachine: Decoding PEM data...
	I1217 20:23:37.795170   41406 main.go:143] libmachine: Parsing certificate...
	I1217 20:23:37.795212   41406 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22186-3611/.minikube/certs/cert.pem
	I1217 20:23:37.795231   41406 main.go:143] libmachine: Decoding PEM data...
	I1217 20:23:37.795241   41406 main.go:143] libmachine: Parsing certificate...
	I1217 20:23:37.795562   41406 main.go:143] libmachine: creating domain...
	I1217 20:23:37.795573   41406 main.go:143] libmachine: creating network...
	I1217 20:23:37.796841   41406 main.go:143] libmachine: found existing default network
	I1217 20:23:37.797062   41406 main.go:143] libmachine: <network connections='4'>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1217 20:23:37.797807   41406 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:38:4c:e0} reservation:<nil>}
	I1217 20:23:37.798686   41406 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001dbc5d0}
	I1217 20:23:37.798766   41406 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-kindnet-698465</name>
	  <dns enable='no'/>
	  <ip address='192.168.50.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.50.2' end='192.168.50.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1217 20:23:37.803402   41406 main.go:143] libmachine: creating private network mk-kindnet-698465 192.168.50.0/24...
	I1217 20:23:37.875031   41406 main.go:143] libmachine: private network mk-kindnet-698465 192.168.50.0/24 created
	I1217 20:23:37.875288   41406 main.go:143] libmachine: <network>
	  <name>mk-kindnet-698465</name>
	  <uuid>5e64cb0f-024e-4f76-9dbe-2ee91a5ae9ff</uuid>
	  <bridge name='virbr2' stp='on' delay='0'/>
	  <mac address='52:54:00:cb:50:4f'/>
	  <dns enable='no'/>
	  <ip address='192.168.50.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.50.2' end='192.168.50.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1217 20:23:37.875315   41406 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22186-3611/.minikube/machines/kindnet-698465 ...
	I1217 20:23:37.875336   41406 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22186-3611/.minikube/cache/iso/amd64/minikube-v1.37.0-1765965980-22186-amd64.iso
	I1217 20:23:37.875359   41406 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22186-3611/.minikube
	I1217 20:23:37.875424   41406 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22186-3611/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22186-3611/.minikube/cache/iso/amd64/minikube-v1.37.0-1765965980-22186-amd64.iso...
	I1217 20:23:38.131543   41406 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22186-3611/.minikube/machines/kindnet-698465/id_rsa...
	I1217 20:23:38.261413   41406 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22186-3611/.minikube/machines/kindnet-698465/kindnet-698465.rawdisk...
	I1217 20:23:38.261456   41406 main.go:143] libmachine: Writing magic tar header
	I1217 20:23:38.261504   41406 main.go:143] libmachine: Writing SSH key tar header
	I1217 20:23:38.261627   41406 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22186-3611/.minikube/machines/kindnet-698465 ...
	I1217 20:23:38.261715   41406 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22186-3611/.minikube/machines/kindnet-698465
	I1217 20:23:38.261760   41406 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22186-3611/.minikube/machines/kindnet-698465 (perms=drwx------)
	I1217 20:23:38.261785   41406 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22186-3611/.minikube/machines
	I1217 20:23:38.261803   41406 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22186-3611/.minikube/machines (perms=drwxr-xr-x)
	I1217 20:23:38.261821   41406 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22186-3611/.minikube
	I1217 20:23:38.261840   41406 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22186-3611/.minikube (perms=drwxr-xr-x)
	I1217 20:23:38.261856   41406 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22186-3611
	I1217 20:23:38.261874   41406 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22186-3611 (perms=drwxrwxr-x)
	I1217 20:23:38.261890   41406 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1217 20:23:38.261907   41406 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1217 20:23:38.261921   41406 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1217 20:23:38.261938   41406 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1217 20:23:38.261952   41406 main.go:143] libmachine: checking permissions on dir: /home
	I1217 20:23:38.261964   41406 main.go:143] libmachine: skipping /home - not owner
	I1217 20:23:38.261971   41406 main.go:143] libmachine: defining domain...
	I1217 20:23:38.263156   41406 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>kindnet-698465</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22186-3611/.minikube/machines/kindnet-698465/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22186-3611/.minikube/machines/kindnet-698465/kindnet-698465.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-kindnet-698465'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1217 20:23:38.268225   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:02:e9:88 in network default
	I1217 20:23:38.268938   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:38.268963   41406 main.go:143] libmachine: starting domain...
	I1217 20:23:38.268970   41406 main.go:143] libmachine: ensuring networks are active...
	I1217 20:23:38.269763   41406 main.go:143] libmachine: Ensuring network default is active
	I1217 20:23:38.270176   41406 main.go:143] libmachine: Ensuring network mk-kindnet-698465 is active
	I1217 20:23:38.270986   41406 main.go:143] libmachine: getting domain XML...
	I1217 20:23:38.272202   41406 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>kindnet-698465</name>
	  <uuid>83b10cb7-b452-4767-a21f-3f78a8d775fb</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22186-3611/.minikube/machines/kindnet-698465/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22186-3611/.minikube/machines/kindnet-698465/kindnet-698465.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:8d:a5:2d'/>
	      <source network='mk-kindnet-698465'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:02:e9:88'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1217 20:23:39.565207   41406 main.go:143] libmachine: waiting for domain to start...
	I1217 20:23:39.566635   41406 main.go:143] libmachine: domain is now running
	I1217 20:23:39.566652   41406 main.go:143] libmachine: waiting for IP...
	I1217 20:23:39.567329   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:39.568150   41406 main.go:143] libmachine: no network interface addresses found for domain kindnet-698465 (source=lease)
	I1217 20:23:39.568166   41406 main.go:143] libmachine: trying to list again with source=arp
	I1217 20:23:39.568510   41406 main.go:143] libmachine: unable to find current IP address of domain kindnet-698465 in network mk-kindnet-698465 (interfaces detected: [])
	I1217 20:23:39.568580   41406 retry.go:31] will retry after 200.627988ms: waiting for domain to come up
	I1217 20:23:39.770818   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:39.771511   41406 main.go:143] libmachine: no network interface addresses found for domain kindnet-698465 (source=lease)
	I1217 20:23:39.771533   41406 main.go:143] libmachine: trying to list again with source=arp
	I1217 20:23:39.771871   41406 main.go:143] libmachine: unable to find current IP address of domain kindnet-698465 in network mk-kindnet-698465 (interfaces detected: [])
	I1217 20:23:39.771901   41406 retry.go:31] will retry after 301.782833ms: waiting for domain to come up
	I1217 20:23:40.075374   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:40.076066   41406 main.go:143] libmachine: no network interface addresses found for domain kindnet-698465 (source=lease)
	I1217 20:23:40.076092   41406 main.go:143] libmachine: trying to list again with source=arp
	I1217 20:23:40.076425   41406 main.go:143] libmachine: unable to find current IP address of domain kindnet-698465 in network mk-kindnet-698465 (interfaces detected: [])
	I1217 20:23:40.076470   41406 retry.go:31] will retry after 341.853479ms: waiting for domain to come up
	I1217 20:23:40.420366   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:40.421292   41406 main.go:143] libmachine: no network interface addresses found for domain kindnet-698465 (source=lease)
	I1217 20:23:40.421320   41406 main.go:143] libmachine: trying to list again with source=arp
	I1217 20:23:40.421773   41406 main.go:143] libmachine: unable to find current IP address of domain kindnet-698465 in network mk-kindnet-698465 (interfaces detected: [])
	I1217 20:23:40.421817   41406 retry.go:31] will retry after 393.806601ms: waiting for domain to come up
	I1217 20:23:40.817400   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:40.818221   41406 main.go:143] libmachine: no network interface addresses found for domain kindnet-698465 (source=lease)
	I1217 20:23:40.818236   41406 main.go:143] libmachine: trying to list again with source=arp
	I1217 20:23:40.818613   41406 main.go:143] libmachine: unable to find current IP address of domain kindnet-698465 in network mk-kindnet-698465 (interfaces detected: [])
	I1217 20:23:40.818648   41406 retry.go:31] will retry after 466.434322ms: waiting for domain to come up
	I1217 20:23:41.286398   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:41.287197   41406 main.go:143] libmachine: no network interface addresses found for domain kindnet-698465 (source=lease)
	I1217 20:23:41.287218   41406 main.go:143] libmachine: trying to list again with source=arp
	I1217 20:23:41.287619   41406 main.go:143] libmachine: unable to find current IP address of domain kindnet-698465 in network mk-kindnet-698465 (interfaces detected: [])
	I1217 20:23:41.287656   41406 retry.go:31] will retry after 724.641469ms: waiting for domain to come up
	I1217 20:23:42.013423   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:42.014031   41406 main.go:143] libmachine: no network interface addresses found for domain kindnet-698465 (source=lease)
	I1217 20:23:42.014049   41406 main.go:143] libmachine: trying to list again with source=arp
	I1217 20:23:42.014430   41406 main.go:143] libmachine: unable to find current IP address of domain kindnet-698465 in network mk-kindnet-698465 (interfaces detected: [])
	I1217 20:23:42.014472   41406 retry.go:31] will retry after 798.648498ms: waiting for domain to come up
	W1217 20:23:40.856943   40822 pod_ready.go:104] pod "coredns-66bc5c9577-z6sfq" is not "Ready", error: <nil>
	W1217 20:23:43.354458   40822 pod_ready.go:104] pod "coredns-66bc5c9577-z6sfq" is not "Ready", error: <nil>
	I1217 20:23:40.895686   39298 logs.go:123] Gathering logs for etcd [07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8] ...
	I1217 20:23:40.895717   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8"
	I1217 20:23:40.942230   39298 logs.go:123] Gathering logs for coredns [132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1] ...
	I1217 20:23:40.942259   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1"
	I1217 20:23:40.987633   39298 logs.go:123] Gathering logs for coredns [a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e] ...
	I1217 20:23:40.987663   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e"
	I1217 20:23:41.031428   39298 logs.go:123] Gathering logs for kube-scheduler [316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e] ...
	I1217 20:23:41.031454   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e"
	I1217 20:23:41.113325   39298 logs.go:123] Gathering logs for kube-scheduler [8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2] ...
	I1217 20:23:41.113371   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2"
	I1217 20:23:41.163589   39298 logs.go:123] Gathering logs for storage-provisioner [b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90] ...
	I1217 20:23:41.163621   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90"
	I1217 20:23:43.706658   39298 api_server.go:253] Checking apiserver healthz at https://192.168.83.130:8443/healthz ...
	I1217 20:23:43.707367   39298 api_server.go:269] stopped: https://192.168.83.130:8443/healthz: Get "https://192.168.83.130:8443/healthz": dial tcp 192.168.83.130:8443: connect: connection refused
	I1217 20:23:43.707429   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:23:43.707486   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:23:43.746181   39298 cri.go:89] found id: "83636ebff083364d3ed28b74e408a7dd6f4875ce6d80d9cbf6b580ee6c1180ad"
	I1217 20:23:43.746208   39298 cri.go:89] found id: ""
	I1217 20:23:43.746219   39298 logs.go:282] 1 containers: [83636ebff083364d3ed28b74e408a7dd6f4875ce6d80d9cbf6b580ee6c1180ad]
	I1217 20:23:43.746288   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:43.750886   39298 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:23:43.750968   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:23:43.791179   39298 cri.go:89] found id: "07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8"
	I1217 20:23:43.791206   39298 cri.go:89] found id: ""
	I1217 20:23:43.791216   39298 logs.go:282] 1 containers: [07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8]
	I1217 20:23:43.791281   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:43.795616   39298 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:23:43.795684   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:23:43.843187   39298 cri.go:89] found id: "132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1"
	I1217 20:23:43.843215   39298 cri.go:89] found id: "a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e"
	I1217 20:23:43.843220   39298 cri.go:89] found id: ""
	I1217 20:23:43.843229   39298 logs.go:282] 2 containers: [132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1 a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e]
	I1217 20:23:43.843307   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:43.848566   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:43.853943   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:23:43.854021   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:23:43.894724   39298 cri.go:89] found id: "316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e"
	I1217 20:23:43.894749   39298 cri.go:89] found id: "8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2"
	I1217 20:23:43.894756   39298 cri.go:89] found id: ""
	I1217 20:23:43.894765   39298 logs.go:282] 2 containers: [316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e 8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2]
	I1217 20:23:43.894838   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:43.900093   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:43.904373   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:23:43.904435   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:23:43.943546   39298 cri.go:89] found id: "5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde"
	I1217 20:23:43.943567   39298 cri.go:89] found id: ""
	I1217 20:23:43.943576   39298 logs.go:282] 1 containers: [5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde]
	I1217 20:23:43.943636   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:43.948687   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:23:43.948758   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:23:43.995515   39298 cri.go:89] found id: "fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda"
	I1217 20:23:43.995564   39298 cri.go:89] found id: ""
	I1217 20:23:43.995577   39298 logs.go:282] 1 containers: [fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda]
	I1217 20:23:43.995666   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:44.000435   39298 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:23:44.000511   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:23:44.038061   39298 cri.go:89] found id: ""
	I1217 20:23:44.038093   39298 logs.go:282] 0 containers: []
	W1217 20:23:44.038106   39298 logs.go:284] No container was found matching "kindnet"
	I1217 20:23:44.038113   39298 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 20:23:44.038183   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 20:23:44.075102   39298 cri.go:89] found id: "b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90"
	I1217 20:23:44.075130   39298 cri.go:89] found id: ""
	I1217 20:23:44.075141   39298 logs.go:282] 1 containers: [b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90]
	I1217 20:23:44.075203   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:44.079820   39298 logs.go:123] Gathering logs for kube-apiserver [83636ebff083364d3ed28b74e408a7dd6f4875ce6d80d9cbf6b580ee6c1180ad] ...
	I1217 20:23:44.079850   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83636ebff083364d3ed28b74e408a7dd6f4875ce6d80d9cbf6b580ee6c1180ad"
	I1217 20:23:44.121112   39298 logs.go:123] Gathering logs for coredns [132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1] ...
	I1217 20:23:44.121148   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1"
	I1217 20:23:44.160578   39298 logs.go:123] Gathering logs for coredns [a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e] ...
	I1217 20:23:44.160612   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e"
	I1217 20:23:44.203545   39298 logs.go:123] Gathering logs for kube-proxy [5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde] ...
	I1217 20:23:44.203586   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde"
	I1217 20:23:44.245166   39298 logs.go:123] Gathering logs for kube-controller-manager [fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda] ...
	I1217 20:23:44.245195   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda"
	I1217 20:23:44.284998   39298 logs.go:123] Gathering logs for storage-provisioner [b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90] ...
	I1217 20:23:44.285027   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90"
	I1217 20:23:44.323309   39298 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:23:44.323344   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:23:44.662235   39298 logs.go:123] Gathering logs for container status ...
	I1217 20:23:44.662274   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:23:44.709983   39298 logs.go:123] Gathering logs for kubelet ...
	I1217 20:23:44.710014   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:23:44.811658   39298 logs.go:123] Gathering logs for dmesg ...
	I1217 20:23:44.811693   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:23:44.828354   39298 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:23:44.828418   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:23:44.911962   39298 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:23:44.911996   39298 logs.go:123] Gathering logs for etcd [07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8] ...
	I1217 20:23:44.912013   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8"
	I1217 20:23:44.964466   39298 logs.go:123] Gathering logs for kube-scheduler [316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e] ...
	I1217 20:23:44.964497   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e"
	I1217 20:23:45.039251   39298 logs.go:123] Gathering logs for kube-scheduler [8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2] ...
	I1217 20:23:45.039297   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2"
	I1217 20:23:45.950276   41240 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 3148e9a334330e99df200e53da3dd65a4273e1bd7bfe071c3d5a2ab5babc79cd 00a2e25f105f3f6ead22ad8dc039761969eb90ee1fbadaddcc49dfff31ea179b 288a092a801206a702bf6578ae6b07e6044aead431acb0edbc60cf72d5ca3b17 1dbeb3a9804d156cf89fd691a0bb1fbcc3a6a27f4878706712834561cd962506 96157b431de6bce3a1d7a6419fab455461481cafe1319056a3a0efa2e1c076cb eb92e68f0672b340c271cd39f375352462fbcd4af3068254b0220df4d2490ea9 d3aa5bb79009bf7c5283af9edad023fa9e3089576286be67bfc0903b30de2deb 458be37c9164dc0dfbd59b0b8dcd61a892bf0878a72ca6f6387f5b534e8724ca 57b5dad3a6eb199d74ed65b35e1c272c026349deacd961f4b0ab358df4b1767a 4d2d7a7c7ff3887b933004ee0d6287b3244e6c54069ad29faa074d1cf1e142fa 4ecea3017ab351f428174d13d98abba7177414280659de95b9d0c5042ef461cb e6ead26278179f9e5597d5e890d711d7382a9ccec643ef3635c4f23c71576ee7: (20.44360538s)
	W1217 20:23:45.950400   41240 kubeadm.go:649] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 3148e9a334330e99df200e53da3dd65a4273e1bd7bfe071c3d5a2ab5babc79cd 00a2e25f105f3f6ead22ad8dc039761969eb90ee1fbadaddcc49dfff31ea179b 288a092a801206a702bf6578ae6b07e6044aead431acb0edbc60cf72d5ca3b17 1dbeb3a9804d156cf89fd691a0bb1fbcc3a6a27f4878706712834561cd962506 96157b431de6bce3a1d7a6419fab455461481cafe1319056a3a0efa2e1c076cb eb92e68f0672b340c271cd39f375352462fbcd4af3068254b0220df4d2490ea9 d3aa5bb79009bf7c5283af9edad023fa9e3089576286be67bfc0903b30de2deb 458be37c9164dc0dfbd59b0b8dcd61a892bf0878a72ca6f6387f5b534e8724ca 57b5dad3a6eb199d74ed65b35e1c272c026349deacd961f4b0ab358df4b1767a 4d2d7a7c7ff3887b933004ee0d6287b3244e6c54069ad29faa074d1cf1e142fa 4ecea3017ab351f428174d13d98abba7177414280659de95b9d0c5042ef461cb e6ead26278179f9e5597d5e890d711d7382a9ccec643ef3635c4f23c71576ee7: Process exited with status 1
	stdout:
	3148e9a334330e99df200e53da3dd65a4273e1bd7bfe071c3d5a2ab5babc79cd
	00a2e25f105f3f6ead22ad8dc039761969eb90ee1fbadaddcc49dfff31ea179b
	288a092a801206a702bf6578ae6b07e6044aead431acb0edbc60cf72d5ca3b17
	1dbeb3a9804d156cf89fd691a0bb1fbcc3a6a27f4878706712834561cd962506
	96157b431de6bce3a1d7a6419fab455461481cafe1319056a3a0efa2e1c076cb
	eb92e68f0672b340c271cd39f375352462fbcd4af3068254b0220df4d2490ea9
	
	stderr:
	E1217 20:23:45.942769    3638 log.go:32] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3aa5bb79009bf7c5283af9edad023fa9e3089576286be67bfc0903b30de2deb\": container with ID starting with d3aa5bb79009bf7c5283af9edad023fa9e3089576286be67bfc0903b30de2deb not found: ID does not exist" containerID="d3aa5bb79009bf7c5283af9edad023fa9e3089576286be67bfc0903b30de2deb"
	time="2025-12-17T20:23:45Z" level=fatal msg="stopping the container \"d3aa5bb79009bf7c5283af9edad023fa9e3089576286be67bfc0903b30de2deb\": rpc error: code = NotFound desc = could not find container \"d3aa5bb79009bf7c5283af9edad023fa9e3089576286be67bfc0903b30de2deb\": container with ID starting with d3aa5bb79009bf7c5283af9edad023fa9e3089576286be67bfc0903b30de2deb not found: ID does not exist"
	I1217 20:23:45.950474   41240 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1217 20:23:45.992857   41240 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 20:23:46.007688   41240 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 17 20:22 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5642 Dec 17 20:22 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1954 Dec 17 20:22 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5586 Dec 17 20:22 /etc/kubernetes/scheduler.conf
	
	I1217 20:23:46.007761   41240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 20:23:46.021626   41240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 20:23:46.035047   41240 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 20:23:46.035120   41240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 20:23:46.049538   41240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 20:23:46.061208   41240 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 20:23:46.061279   41240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 20:23:46.073804   41240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 20:23:46.085256   41240 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 20:23:46.085322   41240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 20:23:46.099589   41240 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 20:23:46.112608   41240 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 20:23:46.172057   41240 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 20:23:42.814555   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:42.815167   41406 main.go:143] libmachine: no network interface addresses found for domain kindnet-698465 (source=lease)
	I1217 20:23:42.815186   41406 main.go:143] libmachine: trying to list again with source=arp
	I1217 20:23:42.815494   41406 main.go:143] libmachine: unable to find current IP address of domain kindnet-698465 in network mk-kindnet-698465 (interfaces detected: [])
	I1217 20:23:42.815553   41406 retry.go:31] will retry after 940.04333ms: waiting for domain to come up
	I1217 20:23:43.757872   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:43.758511   41406 main.go:143] libmachine: no network interface addresses found for domain kindnet-698465 (source=lease)
	I1217 20:23:43.758535   41406 main.go:143] libmachine: trying to list again with source=arp
	I1217 20:23:43.758857   41406 main.go:143] libmachine: unable to find current IP address of domain kindnet-698465 in network mk-kindnet-698465 (interfaces detected: [])
	I1217 20:23:43.758889   41406 retry.go:31] will retry after 1.733677818s: waiting for domain to come up
	I1217 20:23:45.494104   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:45.494887   41406 main.go:143] libmachine: no network interface addresses found for domain kindnet-698465 (source=lease)
	I1217 20:23:45.494909   41406 main.go:143] libmachine: trying to list again with source=arp
	I1217 20:23:45.495262   41406 main.go:143] libmachine: unable to find current IP address of domain kindnet-698465 in network mk-kindnet-698465 (interfaces detected: [])
	I1217 20:23:45.495300   41406 retry.go:31] will retry after 2.310490865s: waiting for domain to come up
	W1217 20:23:45.356222   40822 pod_ready.go:104] pod "coredns-66bc5c9577-z6sfq" is not "Ready", error: <nil>
	W1217 20:23:47.357188   40822 pod_ready.go:104] pod "coredns-66bc5c9577-z6sfq" is not "Ready", error: <nil>
	I1217 20:23:49.389718   40822 pod_ready.go:94] pod "coredns-66bc5c9577-z6sfq" is "Ready"
	I1217 20:23:49.389753   40822 pod_ready.go:86] duration metric: took 20.040719653s for pod "coredns-66bc5c9577-z6sfq" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:23:49.429169   40822 pod_ready.go:83] waiting for pod "etcd-auto-698465" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:23:49.444588   40822 pod_ready.go:94] pod "etcd-auto-698465" is "Ready"
	I1217 20:23:49.444636   40822 pod_ready.go:86] duration metric: took 15.432483ms for pod "etcd-auto-698465" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:23:49.448068   40822 pod_ready.go:83] waiting for pod "kube-apiserver-auto-698465" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:23:49.455431   40822 pod_ready.go:94] pod "kube-apiserver-auto-698465" is "Ready"
	I1217 20:23:49.455463   40822 pod_ready.go:86] duration metric: took 7.366193ms for pod "kube-apiserver-auto-698465" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:23:49.459776   40822 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-698465" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:23:49.552523   40822 pod_ready.go:94] pod "kube-controller-manager-auto-698465" is "Ready"
	I1217 20:23:49.552574   40822 pod_ready.go:86] duration metric: took 92.769387ms for pod "kube-controller-manager-auto-698465" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:23:49.754543   40822 pod_ready.go:83] waiting for pod "kube-proxy-hmgj9" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:23:50.154099   40822 pod_ready.go:94] pod "kube-proxy-hmgj9" is "Ready"
	I1217 20:23:50.154127   40822 pod_ready.go:86] duration metric: took 399.552989ms for pod "kube-proxy-hmgj9" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:23:50.353684   40822 pod_ready.go:83] waiting for pod "kube-scheduler-auto-698465" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:23:50.754231   40822 pod_ready.go:94] pod "kube-scheduler-auto-698465" is "Ready"
	I1217 20:23:50.754267   40822 pod_ready.go:86] duration metric: took 400.516361ms for pod "kube-scheduler-auto-698465" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:23:50.754284   40822 pod_ready.go:40] duration metric: took 31.413609777s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 20:23:50.819003   40822 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1217 20:23:50.820791   40822 out.go:179] * Done! kubectl is now configured to use "auto-698465" cluster and "default" namespace by default
	I1217 20:23:47.583607   39298 api_server.go:253] Checking apiserver healthz at https://192.168.83.130:8443/healthz ...
	I1217 20:23:47.584283   39298 api_server.go:269] stopped: https://192.168.83.130:8443/healthz: Get "https://192.168.83.130:8443/healthz": dial tcp 192.168.83.130:8443: connect: connection refused
	I1217 20:23:47.584334   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:23:47.584387   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:23:47.641093   39298 cri.go:89] found id: "83636ebff083364d3ed28b74e408a7dd6f4875ce6d80d9cbf6b580ee6c1180ad"
	I1217 20:23:47.641127   39298 cri.go:89] found id: ""
	I1217 20:23:47.641138   39298 logs.go:282] 1 containers: [83636ebff083364d3ed28b74e408a7dd6f4875ce6d80d9cbf6b580ee6c1180ad]
	I1217 20:23:47.641207   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:47.645555   39298 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:23:47.645639   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:23:47.687880   39298 cri.go:89] found id: "07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8"
	I1217 20:23:47.687904   39298 cri.go:89] found id: ""
	I1217 20:23:47.687913   39298 logs.go:282] 1 containers: [07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8]
	I1217 20:23:47.687978   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:47.692490   39298 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:23:47.692582   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:23:47.735855   39298 cri.go:89] found id: "132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1"
	I1217 20:23:47.735879   39298 cri.go:89] found id: "a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e"
	I1217 20:23:47.735884   39298 cri.go:89] found id: ""
	I1217 20:23:47.735894   39298 logs.go:282] 2 containers: [132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1 a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e]
	I1217 20:23:47.735957   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:47.742277   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:47.746754   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:23:47.746829   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:23:47.796489   39298 cri.go:89] found id: "316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e"
	I1217 20:23:47.796514   39298 cri.go:89] found id: "8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2"
	I1217 20:23:47.796520   39298 cri.go:89] found id: ""
	I1217 20:23:47.796540   39298 logs.go:282] 2 containers: [316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e 8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2]
	I1217 20:23:47.796614   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:47.801804   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:47.806180   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:23:47.806258   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:23:47.846729   39298 cri.go:89] found id: "5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde"
	I1217 20:23:47.846758   39298 cri.go:89] found id: ""
	I1217 20:23:47.846769   39298 logs.go:282] 1 containers: [5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde]
	I1217 20:23:47.846832   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:47.852671   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:23:47.852744   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:23:47.900186   39298 cri.go:89] found id: "fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda"
	I1217 20:23:47.900226   39298 cri.go:89] found id: ""
	I1217 20:23:47.900237   39298 logs.go:282] 1 containers: [fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda]
	I1217 20:23:47.900302   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:47.905074   39298 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:23:47.905162   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:23:47.951252   39298 cri.go:89] found id: ""
	I1217 20:23:47.951285   39298 logs.go:282] 0 containers: []
	W1217 20:23:47.951298   39298 logs.go:284] No container was found matching "kindnet"
	I1217 20:23:47.951307   39298 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 20:23:47.951368   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 20:23:48.013107   39298 cri.go:89] found id: "b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90"
	I1217 20:23:48.013136   39298 cri.go:89] found id: ""
	I1217 20:23:48.013147   39298 logs.go:282] 1 containers: [b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90]
	I1217 20:23:48.013212   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:48.018857   39298 logs.go:123] Gathering logs for kube-proxy [5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde] ...
	I1217 20:23:48.018884   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde"
	I1217 20:23:48.066746   39298 logs.go:123] Gathering logs for storage-provisioner [b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90] ...
	I1217 20:23:48.066785   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90"
	I1217 20:23:48.119816   39298 logs.go:123] Gathering logs for dmesg ...
	I1217 20:23:48.119852   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:23:48.137355   39298 logs.go:123] Gathering logs for etcd [07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8] ...
	I1217 20:23:48.137387   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8"
	I1217 20:23:48.193230   39298 logs.go:123] Gathering logs for coredns [132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1] ...
	I1217 20:23:48.193281   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1"
	I1217 20:23:48.244460   39298 logs.go:123] Gathering logs for kube-controller-manager [fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda] ...
	I1217 20:23:48.244490   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda"
	I1217 20:23:48.290486   39298 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:23:48.290555   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:23:48.804701   39298 logs.go:123] Gathering logs for container status ...
	I1217 20:23:48.804762   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:23:48.853757   39298 logs.go:123] Gathering logs for kubelet ...
	I1217 20:23:48.853796   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:23:48.975214   39298 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:23:48.975256   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:23:49.060053   39298 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:23:49.060085   39298 logs.go:123] Gathering logs for kube-apiserver [83636ebff083364d3ed28b74e408a7dd6f4875ce6d80d9cbf6b580ee6c1180ad] ...
	I1217 20:23:49.060106   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83636ebff083364d3ed28b74e408a7dd6f4875ce6d80d9cbf6b580ee6c1180ad"
	I1217 20:23:49.109489   39298 logs.go:123] Gathering logs for coredns [a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e] ...
	I1217 20:23:49.109542   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e"
	I1217 20:23:49.166484   39298 logs.go:123] Gathering logs for kube-scheduler [316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e] ...
	I1217 20:23:49.166547   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e"
	I1217 20:23:49.277669   39298 logs.go:123] Gathering logs for kube-scheduler [8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2] ...
	I1217 20:23:49.277719   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2"
	I1217 20:23:47.892999   41240 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.720893614s)
	I1217 20:23:47.893078   41240 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1217 20:23:48.308106   41240 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 20:23:48.383941   41240 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1217 20:23:48.504752   41240 api_server.go:52] waiting for apiserver process to appear ...
	I1217 20:23:48.504848   41240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:23:49.005152   41240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:23:49.505105   41240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:23:49.560108   41240 api_server.go:72] duration metric: took 1.055385553s to wait for apiserver process to appear ...
	I1217 20:23:49.560140   41240 api_server.go:88] waiting for apiserver healthz status ...
	I1217 20:23:49.560163   41240 api_server.go:253] Checking apiserver healthz at https://192.168.61.108:8443/healthz ...
	I1217 20:23:49.560753   41240 api_server.go:269] stopped: https://192.168.61.108:8443/healthz: Get "https://192.168.61.108:8443/healthz": dial tcp 192.168.61.108:8443: connect: connection refused
	I1217 20:23:50.060310   41240 api_server.go:253] Checking apiserver healthz at https://192.168.61.108:8443/healthz ...
	I1217 20:23:47.808168   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:47.809041   41406 main.go:143] libmachine: no network interface addresses found for domain kindnet-698465 (source=lease)
	I1217 20:23:47.809107   41406 main.go:143] libmachine: trying to list again with source=arp
	I1217 20:23:47.809610   41406 main.go:143] libmachine: unable to find current IP address of domain kindnet-698465 in network mk-kindnet-698465 (interfaces detected: [])
	I1217 20:23:47.809650   41406 retry.go:31] will retry after 2.388899192s: waiting for domain to come up
	I1217 20:23:50.199766   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:50.200512   41406 main.go:143] libmachine: no network interface addresses found for domain kindnet-698465 (source=lease)
	I1217 20:23:50.200560   41406 main.go:143] libmachine: trying to list again with source=arp
	I1217 20:23:50.200957   41406 main.go:143] libmachine: unable to find current IP address of domain kindnet-698465 in network mk-kindnet-698465 (interfaces detected: [])
	I1217 20:23:50.200986   41406 retry.go:31] will retry after 3.596030173s: waiting for domain to come up
	I1217 20:23:52.727895   41240 api_server.go:279] https://192.168.61.108:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1217 20:23:52.727921   41240 api_server.go:103] status: https://192.168.61.108:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1217 20:23:52.727934   41240 api_server.go:253] Checking apiserver healthz at https://192.168.61.108:8443/healthz ...
	I1217 20:23:52.778928   41240 api_server.go:279] https://192.168.61.108:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1217 20:23:52.779029   41240 api_server.go:103] status: https://192.168.61.108:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1217 20:23:53.060285   41240 api_server.go:253] Checking apiserver healthz at https://192.168.61.108:8443/healthz ...
	I1217 20:23:53.067054   41240 api_server.go:279] https://192.168.61.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 20:23:53.067081   41240 api_server.go:103] status: https://192.168.61.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 20:23:53.560619   41240 api_server.go:253] Checking apiserver healthz at https://192.168.61.108:8443/healthz ...
	I1217 20:23:53.565072   41240 api_server.go:279] https://192.168.61.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 20:23:53.565101   41240 api_server.go:103] status: https://192.168.61.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 20:23:54.060604   41240 api_server.go:253] Checking apiserver healthz at https://192.168.61.108:8443/healthz ...
	I1217 20:23:54.079118   41240 api_server.go:279] https://192.168.61.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 20:23:54.079159   41240 api_server.go:103] status: https://192.168.61.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 20:23:54.560855   41240 api_server.go:253] Checking apiserver healthz at https://192.168.61.108:8443/healthz ...
	I1217 20:23:54.568255   41240 api_server.go:279] https://192.168.61.108:8443/healthz returned 200:
	ok
	I1217 20:23:54.576011   41240 api_server.go:141] control plane version: v1.34.3
	I1217 20:23:54.576039   41240 api_server.go:131] duration metric: took 5.015893056s to wait for apiserver health ...
	I1217 20:23:54.576049   41240 cni.go:84] Creating CNI manager for ""
	I1217 20:23:54.576055   41240 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 20:23:54.577405   41240 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1217 20:23:54.579202   41240 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1217 20:23:54.594808   41240 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1217 20:23:54.626028   41240 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 20:23:54.635106   41240 system_pods.go:59] 6 kube-system pods found
	I1217 20:23:54.635144   41240 system_pods.go:61] "coredns-66bc5c9577-7grrd" [7659c433-1b61-45dd-a6ee-14007a1efcda] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 20:23:54.635154   41240 system_pods.go:61] "etcd-pause-722044" [46516f16-310e-4672-baba-2f07ada89233] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 20:23:54.635164   41240 system_pods.go:61] "kube-apiserver-pause-722044" [90032952-2169-493c-bbf4-a1163465ed8e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 20:23:54.635177   41240 system_pods.go:61] "kube-controller-manager-pause-722044" [a50d4a20-b5cf-4223-a26c-086d8e1e3c0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 20:23:54.635194   41240 system_pods.go:61] "kube-proxy-snthq" [24049acb-98c2-425b-b662-917a0f36e924] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 20:23:54.635202   41240 system_pods.go:61] "kube-scheduler-pause-722044" [5fcf648b-a03c-4a43-85f6-4cec9e10d0b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 20:23:54.635215   41240 system_pods.go:74] duration metric: took 9.167052ms to wait for pod list to return data ...
	I1217 20:23:54.635226   41240 node_conditions.go:102] verifying NodePressure condition ...
	I1217 20:23:54.640424   41240 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1217 20:23:54.640481   41240 node_conditions.go:123] node cpu capacity is 2
	I1217 20:23:54.640501   41240 node_conditions.go:105] duration metric: took 5.268699ms to run NodePressure ...
	I1217 20:23:54.640580   41240 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 20:23:55.032093   41240 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1217 20:23:55.037193   41240 kubeadm.go:744] kubelet initialised
	I1217 20:23:55.037225   41240 kubeadm.go:745] duration metric: took 5.097811ms waiting for restarted kubelet to initialise ...
	I1217 20:23:55.037247   41240 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 20:23:55.055261   41240 ops.go:34] apiserver oom_adj: -16
	I1217 20:23:55.055288   41240 kubeadm.go:602] duration metric: took 29.665913722s to restartPrimaryControlPlane
	I1217 20:23:55.055301   41240 kubeadm.go:403] duration metric: took 29.820299731s to StartCluster
	I1217 20:23:55.055323   41240 settings.go:142] acquiring lock: {Name:mke3c622f98fffe95e3e848232032c1bad05dc71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:23:55.055414   41240 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22186-3611/kubeconfig
	I1217 20:23:55.056440   41240 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-3611/kubeconfig: {Name:mk319ed0207c46a4a2ae4d9b320056846508447c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:23:55.056705   41240 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.61.108 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 20:23:55.056815   41240 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 20:23:55.057049   41240 config.go:182] Loaded profile config "pause-722044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:23:55.058621   41240 out.go:179] * Enabled addons: 
	I1217 20:23:55.058623   41240 out.go:179] * Verifying Kubernetes components...
	I1217 20:23:51.842753   39298 api_server.go:253] Checking apiserver healthz at https://192.168.83.130:8443/healthz ...
	I1217 20:23:51.843513   39298 api_server.go:269] stopped: https://192.168.83.130:8443/healthz: Get "https://192.168.83.130:8443/healthz": dial tcp 192.168.83.130:8443: connect: connection refused
	I1217 20:23:51.843607   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:23:51.843672   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:23:51.899729   39298 cri.go:89] found id: "83636ebff083364d3ed28b74e408a7dd6f4875ce6d80d9cbf6b580ee6c1180ad"
	I1217 20:23:51.899759   39298 cri.go:89] found id: ""
	I1217 20:23:51.899769   39298 logs.go:282] 1 containers: [83636ebff083364d3ed28b74e408a7dd6f4875ce6d80d9cbf6b580ee6c1180ad]
	I1217 20:23:51.899863   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:51.906094   39298 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:23:51.906166   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:23:51.962455   39298 cri.go:89] found id: "07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8"
	I1217 20:23:51.962481   39298 cri.go:89] found id: ""
	I1217 20:23:51.962492   39298 logs.go:282] 1 containers: [07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8]
	I1217 20:23:51.962573   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:51.968281   39298 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:23:51.968368   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:23:52.020810   39298 cri.go:89] found id: "132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1"
	I1217 20:23:52.020840   39298 cri.go:89] found id: "a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e"
	I1217 20:23:52.020847   39298 cri.go:89] found id: ""
	I1217 20:23:52.020857   39298 logs.go:282] 2 containers: [132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1 a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e]
	I1217 20:23:52.020920   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:52.026760   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:52.031846   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:23:52.031905   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:23:52.085122   39298 cri.go:89] found id: "316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e"
	I1217 20:23:52.085147   39298 cri.go:89] found id: "8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2"
	I1217 20:23:52.085153   39298 cri.go:89] found id: ""
	I1217 20:23:52.085163   39298 logs.go:282] 2 containers: [316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e 8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2]
	I1217 20:23:52.085229   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:52.091064   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:52.096407   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:23:52.096468   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:23:52.142038   39298 cri.go:89] found id: "5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde"
	I1217 20:23:52.142069   39298 cri.go:89] found id: ""
	I1217 20:23:52.142080   39298 logs.go:282] 1 containers: [5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde]
	I1217 20:23:52.142142   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:52.146324   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:23:52.146404   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:23:52.192321   39298 cri.go:89] found id: "fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda"
	I1217 20:23:52.192349   39298 cri.go:89] found id: ""
	I1217 20:23:52.192358   39298 logs.go:282] 1 containers: [fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda]
	I1217 20:23:52.192433   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:52.198132   39298 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:23:52.198201   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:23:52.250395   39298 cri.go:89] found id: ""
	I1217 20:23:52.250430   39298 logs.go:282] 0 containers: []
	W1217 20:23:52.250443   39298 logs.go:284] No container was found matching "kindnet"
	I1217 20:23:52.250451   39298 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 20:23:52.250522   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 20:23:52.293584   39298 cri.go:89] found id: "b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90"
	I1217 20:23:52.293618   39298 cri.go:89] found id: ""
	I1217 20:23:52.293631   39298 logs.go:282] 1 containers: [b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90]
	I1217 20:23:52.293692   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:52.299120   39298 logs.go:123] Gathering logs for coredns [132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1] ...
	I1217 20:23:52.299146   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1"
	I1217 20:23:52.346571   39298 logs.go:123] Gathering logs for kube-scheduler [8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2] ...
	I1217 20:23:52.346613   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2"
	I1217 20:23:52.391290   39298 logs.go:123] Gathering logs for kube-proxy [5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde] ...
	I1217 20:23:52.391323   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde"
	I1217 20:23:52.449891   39298 logs.go:123] Gathering logs for kube-controller-manager [fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda] ...
	I1217 20:23:52.449926   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda"
	I1217 20:23:52.493666   39298 logs.go:123] Gathering logs for container status ...
	I1217 20:23:52.493704   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:23:52.560755   39298 logs.go:123] Gathering logs for kubelet ...
	I1217 20:23:52.560796   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:23:52.687614   39298 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:23:52.687668   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:23:52.806780   39298 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:23:52.806818   39298 logs.go:123] Gathering logs for etcd [07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8] ...
	I1217 20:23:52.806835   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8"
	I1217 20:23:52.863623   39298 logs.go:123] Gathering logs for coredns [a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e] ...
	I1217 20:23:52.863667   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e"
	I1217 20:23:52.904336   39298 logs.go:123] Gathering logs for kube-scheduler [316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e] ...
	I1217 20:23:52.904366   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e"
	I1217 20:23:53.005597   39298 logs.go:123] Gathering logs for storage-provisioner [b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90] ...
	I1217 20:23:53.005637   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90"
	I1217 20:23:53.057598   39298 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:23:53.057640   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:23:53.396904   39298 logs.go:123] Gathering logs for dmesg ...
	I1217 20:23:53.396938   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:23:53.415155   39298 logs.go:123] Gathering logs for kube-apiserver [83636ebff083364d3ed28b74e408a7dd6f4875ce6d80d9cbf6b580ee6c1180ad] ...
	I1217 20:23:53.415186   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83636ebff083364d3ed28b74e408a7dd6f4875ce6d80d9cbf6b580ee6c1180ad"
	I1217 20:23:55.059769   41240 addons.go:530] duration metric: took 2.961994ms for enable addons: enabled=[]
	I1217 20:23:55.059792   41240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:23:55.299732   41240 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:23:55.320794   41240 node_ready.go:35] waiting up to 6m0s for node "pause-722044" to be "Ready" ...
	I1217 20:23:55.324200   41240 node_ready.go:49] node "pause-722044" is "Ready"
	I1217 20:23:55.324238   41240 node_ready.go:38] duration metric: took 3.378287ms for node "pause-722044" to be "Ready" ...
	I1217 20:23:55.324256   41240 api_server.go:52] waiting for apiserver process to appear ...
	I1217 20:23:55.324317   41240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:23:55.349627   41240 api_server.go:72] duration metric: took 292.888358ms to wait for apiserver process to appear ...
	I1217 20:23:55.349660   41240 api_server.go:88] waiting for apiserver healthz status ...
	I1217 20:23:55.349684   41240 api_server.go:253] Checking apiserver healthz at https://192.168.61.108:8443/healthz ...
	I1217 20:23:55.355183   41240 api_server.go:279] https://192.168.61.108:8443/healthz returned 200:
	ok
	I1217 20:23:55.356167   41240 api_server.go:141] control plane version: v1.34.3
	I1217 20:23:55.356192   41240 api_server.go:131] duration metric: took 6.524574ms to wait for apiserver health ...
	I1217 20:23:55.356203   41240 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 20:23:55.359214   41240 system_pods.go:59] 6 kube-system pods found
	I1217 20:23:55.359266   41240 system_pods.go:61] "coredns-66bc5c9577-7grrd" [7659c433-1b61-45dd-a6ee-14007a1efcda] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 20:23:55.359287   41240 system_pods.go:61] "etcd-pause-722044" [46516f16-310e-4672-baba-2f07ada89233] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 20:23:55.359301   41240 system_pods.go:61] "kube-apiserver-pause-722044" [90032952-2169-493c-bbf4-a1163465ed8e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 20:23:55.359312   41240 system_pods.go:61] "kube-controller-manager-pause-722044" [a50d4a20-b5cf-4223-a26c-086d8e1e3c0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 20:23:55.359321   41240 system_pods.go:61] "kube-proxy-snthq" [24049acb-98c2-425b-b662-917a0f36e924] Running
	I1217 20:23:55.359331   41240 system_pods.go:61] "kube-scheduler-pause-722044" [5fcf648b-a03c-4a43-85f6-4cec9e10d0b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 20:23:55.359348   41240 system_pods.go:74] duration metric: took 3.130779ms to wait for pod list to return data ...
	I1217 20:23:55.359360   41240 default_sa.go:34] waiting for default service account to be created ...
	I1217 20:23:55.361768   41240 default_sa.go:45] found service account: "default"
	I1217 20:23:55.361784   41240 default_sa.go:55] duration metric: took 2.41855ms for default service account to be created ...
	I1217 20:23:55.361791   41240 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 20:23:55.365589   41240 system_pods.go:86] 6 kube-system pods found
	I1217 20:23:55.365625   41240 system_pods.go:89] "coredns-66bc5c9577-7grrd" [7659c433-1b61-45dd-a6ee-14007a1efcda] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 20:23:55.365636   41240 system_pods.go:89] "etcd-pause-722044" [46516f16-310e-4672-baba-2f07ada89233] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 20:23:55.365645   41240 system_pods.go:89] "kube-apiserver-pause-722044" [90032952-2169-493c-bbf4-a1163465ed8e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 20:23:55.365655   41240 system_pods.go:89] "kube-controller-manager-pause-722044" [a50d4a20-b5cf-4223-a26c-086d8e1e3c0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 20:23:55.365663   41240 system_pods.go:89] "kube-proxy-snthq" [24049acb-98c2-425b-b662-917a0f36e924] Running
	I1217 20:23:55.365674   41240 system_pods.go:89] "kube-scheduler-pause-722044" [5fcf648b-a03c-4a43-85f6-4cec9e10d0b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 20:23:55.365684   41240 system_pods.go:126] duration metric: took 3.886111ms to wait for k8s-apps to be running ...
	I1217 20:23:55.365694   41240 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 20:23:55.365746   41240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 20:23:55.389935   41240 system_svc.go:56] duration metric: took 24.232603ms WaitForService to wait for kubelet
	I1217 20:23:55.389974   41240 kubeadm.go:587] duration metric: took 333.240142ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 20:23:55.390001   41240 node_conditions.go:102] verifying NodePressure condition ...
	I1217 20:23:55.394850   41240 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1217 20:23:55.394881   41240 node_conditions.go:123] node cpu capacity is 2
	I1217 20:23:55.394897   41240 node_conditions.go:105] duration metric: took 4.890051ms to run NodePressure ...
	I1217 20:23:55.394915   41240 start.go:242] waiting for startup goroutines ...
	I1217 20:23:55.394925   41240 start.go:247] waiting for cluster config update ...
	I1217 20:23:55.394940   41240 start.go:256] writing updated cluster config ...
	I1217 20:23:55.395199   41240 ssh_runner.go:195] Run: rm -f paused
	I1217 20:23:55.402851   41240 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 20:23:55.403869   41240 kapi.go:59] client config for pause-722044: &rest.Config{Host:"https://192.168.61.108:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22186-3611/.minikube/profiles/pause-722044/client.crt", KeyFile:"/home/jenkins/minikube-integration/22186-3611/.minikube/profiles/pause-722044/client.key", CAFile:"/home/jenkins/minikube-integration/22186-3611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2817500), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 20:23:55.408694   41240 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7grrd" in "kube-system" namespace to be "Ready" or be gone ...
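
The "extra waiting" step above polls each control-plane pod for the Ready condition through the rest.Config printed in the log. A minimal client-go sketch of that readiness check, with hypothetical cert/key/CA paths standing in for the profile paths shown above:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	cfg := &rest.Config{
    		Host: "https://192.168.61.108:8443",
    		TLSClientConfig: rest.TLSClientConfig{
    			CertFile: "/path/to/profiles/pause-722044/client.crt", // hypothetical paths
    			KeyFile:  "/path/to/profiles/pause-722044/client.key",
    			CAFile:   "/path/to/ca.crt",
    		},
    	}
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pod, err := clientset.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-66bc5c9577-7grrd", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	ready := false
    	for _, c := range pod.Status.Conditions {
    		// The wait loop above is satisfied once this condition is True (or the pod is gone).
    		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    			ready = true
    		}
    	}
    	fmt.Println("pod Ready:", ready)
    }
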
	I1217 20:23:53.798634   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:53.799460   41406 main.go:143] libmachine: domain kindnet-698465 has current primary IP address 192.168.50.49 and MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:53.799483   41406 main.go:143] libmachine: found domain IP: 192.168.50.49
	I1217 20:23:53.799493   41406 main.go:143] libmachine: reserving static IP address...
	I1217 20:23:53.799991   41406 main.go:143] libmachine: unable to find host DHCP lease matching {name: "kindnet-698465", mac: "52:54:00:8d:a5:2d", ip: "192.168.50.49"} in network mk-kindnet-698465
	I1217 20:23:54.017572   41406 main.go:143] libmachine: reserved static IP address 192.168.50.49 for domain kindnet-698465
	I1217 20:23:54.017607   41406 main.go:143] libmachine: waiting for SSH...
	I1217 20:23:54.017616   41406 main.go:143] libmachine: Getting to WaitForSSH function...
	I1217 20:23:54.022201   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:54.023000   41406 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:a5:2d", ip: ""} in network mk-kindnet-698465: {Iface:virbr2 ExpiryTime:2025-12-17 21:23:53 +0000 UTC Type:0 Mac:52:54:00:8d:a5:2d Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8d:a5:2d}
	I1217 20:23:54.023037   41406 main.go:143] libmachine: domain kindnet-698465 has defined IP address 192.168.50.49 and MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:54.023484   41406 main.go:143] libmachine: Using SSH client type: native
	I1217 20:23:54.023825   41406 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.49 22 <nil> <nil>}
	I1217 20:23:54.023843   41406 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1217 20:23:54.144827   41406 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 20:23:54.145262   41406 main.go:143] libmachine: domain creation complete
	I1217 20:23:54.147133   41406 machine.go:94] provisionDockerMachine start ...
	I1217 20:23:54.150413   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:54.151038   41406 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:a5:2d", ip: ""} in network mk-kindnet-698465: {Iface:virbr2 ExpiryTime:2025-12-17 21:23:53 +0000 UTC Type:0 Mac:52:54:00:8d:a5:2d Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:kindnet-698465 Clientid:01:52:54:00:8d:a5:2d}
	I1217 20:23:54.151074   41406 main.go:143] libmachine: domain kindnet-698465 has defined IP address 192.168.50.49 and MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:54.151327   41406 main.go:143] libmachine: Using SSH client type: native
	I1217 20:23:54.151639   41406 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.49 22 <nil> <nil>}
	I1217 20:23:54.151656   41406 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 20:23:54.278159   41406 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1217 20:23:54.278200   41406 buildroot.go:166] provisioning hostname "kindnet-698465"
	I1217 20:23:54.281927   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:54.282606   41406 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:a5:2d", ip: ""} in network mk-kindnet-698465: {Iface:virbr2 ExpiryTime:2025-12-17 21:23:53 +0000 UTC Type:0 Mac:52:54:00:8d:a5:2d Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:kindnet-698465 Clientid:01:52:54:00:8d:a5:2d}
	I1217 20:23:54.282642   41406 main.go:143] libmachine: domain kindnet-698465 has defined IP address 192.168.50.49 and MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:54.282923   41406 main.go:143] libmachine: Using SSH client type: native
	I1217 20:23:54.283170   41406 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.49 22 <nil> <nil>}
	I1217 20:23:54.283184   41406 main.go:143] libmachine: About to run SSH command:
	sudo hostname kindnet-698465 && echo "kindnet-698465" | sudo tee /etc/hostname
	I1217 20:23:54.484261   41406 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-698465
	
	I1217 20:23:54.487891   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:54.488323   41406 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:a5:2d", ip: ""} in network mk-kindnet-698465: {Iface:virbr2 ExpiryTime:2025-12-17 21:23:53 +0000 UTC Type:0 Mac:52:54:00:8d:a5:2d Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:kindnet-698465 Clientid:01:52:54:00:8d:a5:2d}
	I1217 20:23:54.488354   41406 main.go:143] libmachine: domain kindnet-698465 has defined IP address 192.168.50.49 and MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:54.488568   41406 main.go:143] libmachine: Using SSH client type: native
	I1217 20:23:54.488824   41406 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.49 22 <nil> <nil>}
	I1217 20:23:54.488840   41406 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-698465' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-698465/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-698465' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 20:23:54.625374   41406 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 20:23:54.625406   41406 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22186-3611/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-3611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-3611/.minikube}
	I1217 20:23:54.625429   41406 buildroot.go:174] setting up certificates
	I1217 20:23:54.625439   41406 provision.go:84] configureAuth start
	I1217 20:23:54.629036   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:54.629536   41406 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:a5:2d", ip: ""} in network mk-kindnet-698465: {Iface:virbr2 ExpiryTime:2025-12-17 21:23:53 +0000 UTC Type:0 Mac:52:54:00:8d:a5:2d Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:kindnet-698465 Clientid:01:52:54:00:8d:a5:2d}
	I1217 20:23:54.629568   41406 main.go:143] libmachine: domain kindnet-698465 has defined IP address 192.168.50.49 and MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:54.632680   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:54.633146   41406 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:a5:2d", ip: ""} in network mk-kindnet-698465: {Iface:virbr2 ExpiryTime:2025-12-17 21:23:53 +0000 UTC Type:0 Mac:52:54:00:8d:a5:2d Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:kindnet-698465 Clientid:01:52:54:00:8d:a5:2d}
	I1217 20:23:54.633178   41406 main.go:143] libmachine: domain kindnet-698465 has defined IP address 192.168.50.49 and MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:54.633357   41406 provision.go:143] copyHostCerts
	I1217 20:23:54.633439   41406 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-3611/.minikube/ca.pem, removing ...
	I1217 20:23:54.633459   41406 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-3611/.minikube/ca.pem
	I1217 20:23:54.633558   41406 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-3611/.minikube/ca.pem (1082 bytes)
	I1217 20:23:54.633686   41406 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-3611/.minikube/cert.pem, removing ...
	I1217 20:23:54.633698   41406 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-3611/.minikube/cert.pem
	I1217 20:23:54.633772   41406 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-3611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-3611/.minikube/cert.pem (1123 bytes)
	I1217 20:23:54.633881   41406 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-3611/.minikube/key.pem, removing ...
	I1217 20:23:54.633893   41406 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-3611/.minikube/key.pem
	I1217 20:23:54.633932   41406 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-3611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-3611/.minikube/key.pem (1679 bytes)
	I1217 20:23:54.634009   41406 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-3611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca-key.pem org=jenkins.kindnet-698465 san=[127.0.0.1 192.168.50.49 kindnet-698465 localhost minikube]
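
The server certificate above is generated with the SAN list shown (two IPs plus three host names). A compact sketch of building such a certificate with Go's crypto/x509, self-signed here purely for brevity; minikube instead signs it with the CA key referenced in the log:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// SAN list as printed in the provisioning log; split into IP and DNS SANs.
    	sans := []string{"127.0.0.1", "192.168.50.49", "kindnet-698465", "localhost", "minikube"}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.kindnet-698465"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	for _, s := range sans {
    		if ip := net.ParseIP(s); ip != nil {
    			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
    		} else {
    			tmpl.DNSNames = append(tmpl.DNSNames, s)
    		}
    	}
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	// Self-signed for the sketch; the real cert is issued by the minikube CA.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
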
	I1217 20:23:54.723639   41406 provision.go:177] copyRemoteCerts
	I1217 20:23:54.723701   41406 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 20:23:54.726683   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:54.727102   41406 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:a5:2d", ip: ""} in network mk-kindnet-698465: {Iface:virbr2 ExpiryTime:2025-12-17 21:23:53 +0000 UTC Type:0 Mac:52:54:00:8d:a5:2d Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:kindnet-698465 Clientid:01:52:54:00:8d:a5:2d}
	I1217 20:23:54.727127   41406 main.go:143] libmachine: domain kindnet-698465 has defined IP address 192.168.50.49 and MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:54.727295   41406 sshutil.go:53] new ssh client: &{IP:192.168.50.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/kindnet-698465/id_rsa Username:docker}
	I1217 20:23:54.815884   41406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 20:23:54.852945   41406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 20:23:54.892108   41406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1217 20:23:54.942459   41406 provision.go:87] duration metric: took 317.007707ms to configureAuth
	I1217 20:23:54.942494   41406 buildroot.go:189] setting minikube options for container-runtime
	I1217 20:23:54.942705   41406 config.go:182] Loaded profile config "kindnet-698465": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:23:54.947831   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:54.949142   41406 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:a5:2d", ip: ""} in network mk-kindnet-698465: {Iface:virbr2 ExpiryTime:2025-12-17 21:23:53 +0000 UTC Type:0 Mac:52:54:00:8d:a5:2d Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:kindnet-698465 Clientid:01:52:54:00:8d:a5:2d}
	I1217 20:23:54.949217   41406 main.go:143] libmachine: domain kindnet-698465 has defined IP address 192.168.50.49 and MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:54.949694   41406 main.go:143] libmachine: Using SSH client type: native
	I1217 20:23:54.950102   41406 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.49 22 <nil> <nil>}
	I1217 20:23:54.950162   41406 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 20:23:55.501992   41406 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 20:23:55.502021   41406 machine.go:97] duration metric: took 1.354869926s to provisionDockerMachine
	I1217 20:23:55.502034   41406 client.go:176] duration metric: took 17.70696905s to LocalClient.Create
	I1217 20:23:55.502054   41406 start.go:167] duration metric: took 17.707026452s to libmachine.API.Create "kindnet-698465"
	I1217 20:23:55.502062   41406 start.go:293] postStartSetup for "kindnet-698465" (driver="kvm2")
	I1217 20:23:55.502074   41406 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 20:23:55.502149   41406 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 20:23:55.505622   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:55.506133   41406 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:a5:2d", ip: ""} in network mk-kindnet-698465: {Iface:virbr2 ExpiryTime:2025-12-17 21:23:53 +0000 UTC Type:0 Mac:52:54:00:8d:a5:2d Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:kindnet-698465 Clientid:01:52:54:00:8d:a5:2d}
	I1217 20:23:55.506168   41406 main.go:143] libmachine: domain kindnet-698465 has defined IP address 192.168.50.49 and MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:55.506383   41406 sshutil.go:53] new ssh client: &{IP:192.168.50.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/kindnet-698465/id_rsa Username:docker}
	I1217 20:23:55.603893   41406 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 20:23:55.609641   41406 info.go:137] Remote host: Buildroot 2025.02
	I1217 20:23:55.609678   41406 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-3611/.minikube/addons for local assets ...
	I1217 20:23:55.609771   41406 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-3611/.minikube/files for local assets ...
	I1217 20:23:55.609875   41406 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-3611/.minikube/files/etc/ssl/certs/75312.pem -> 75312.pem in /etc/ssl/certs
	I1217 20:23:55.609997   41406 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 20:23:55.625410   41406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/files/etc/ssl/certs/75312.pem --> /etc/ssl/certs/75312.pem (1708 bytes)
	I1217 20:23:55.660468   41406 start.go:296] duration metric: took 158.389033ms for postStartSetup
	I1217 20:23:55.663957   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:55.664328   41406 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:a5:2d", ip: ""} in network mk-kindnet-698465: {Iface:virbr2 ExpiryTime:2025-12-17 21:23:53 +0000 UTC Type:0 Mac:52:54:00:8d:a5:2d Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:kindnet-698465 Clientid:01:52:54:00:8d:a5:2d}
	I1217 20:23:55.664361   41406 main.go:143] libmachine: domain kindnet-698465 has defined IP address 192.168.50.49 and MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:55.664615   41406 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/config.json ...
	I1217 20:23:55.664821   41406 start.go:128] duration metric: took 17.871178068s to createHost
	I1217 20:23:55.666907   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:55.667256   41406 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:a5:2d", ip: ""} in network mk-kindnet-698465: {Iface:virbr2 ExpiryTime:2025-12-17 21:23:53 +0000 UTC Type:0 Mac:52:54:00:8d:a5:2d Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:kindnet-698465 Clientid:01:52:54:00:8d:a5:2d}
	I1217 20:23:55.667277   41406 main.go:143] libmachine: domain kindnet-698465 has defined IP address 192.168.50.49 and MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:55.667420   41406 main.go:143] libmachine: Using SSH client type: native
	I1217 20:23:55.667642   41406 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.49 22 <nil> <nil>}
	I1217 20:23:55.667655   41406 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1217 20:23:55.780042   41406 main.go:143] libmachine: SSH cmd err, output: <nil>: 1766003035.732721025
	
	I1217 20:23:55.780082   41406 fix.go:216] guest clock: 1766003035.732721025
	I1217 20:23:55.780093   41406 fix.go:229] Guest: 2025-12-17 20:23:55.732721025 +0000 UTC Remote: 2025-12-17 20:23:55.664834065 +0000 UTC m=+17.975072934 (delta=67.88696ms)
	I1217 20:23:55.780117   41406 fix.go:200] guest clock delta is within tolerance: 67.88696ms
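
The clock check above reads `date +%s.%N` on the guest and compares it with the host's wall clock; the 67.88ms delta is inside tolerance, so the guest clock is left alone. A small sketch of that comparison, assuming the guest timestamp has already been read back over SSH and an illustrative (not minikube's actual) tolerance value:

    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"time"
    )

    func main() {
    	// Output of `date +%s.%N` on the guest, as captured in the log above.
    	guestRaw := "1766003035.732721025"
    	secs, err := strconv.ParseFloat(guestRaw, 64)
    	if err != nil {
    		panic(err)
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	delta := time.Since(guest)
    	// Only adjust the guest clock when the skew exceeds the tolerance (value assumed for this sketch).
    	tolerance := 2 * time.Second
    	fmt.Printf("delta=%v within tolerance: %v\n", delta, math.Abs(delta.Seconds()) <= tolerance.Seconds())
    }
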
	I1217 20:23:55.780125   41406 start.go:83] releasing machines lock for "kindnet-698465", held for 17.986598589s
	I1217 20:23:55.783237   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:55.783610   41406 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:a5:2d", ip: ""} in network mk-kindnet-698465: {Iface:virbr2 ExpiryTime:2025-12-17 21:23:53 +0000 UTC Type:0 Mac:52:54:00:8d:a5:2d Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:kindnet-698465 Clientid:01:52:54:00:8d:a5:2d}
	I1217 20:23:55.783635   41406 main.go:143] libmachine: domain kindnet-698465 has defined IP address 192.168.50.49 and MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:55.784163   41406 ssh_runner.go:195] Run: cat /version.json
	I1217 20:23:55.784189   41406 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 20:23:55.787185   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:55.787400   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:55.787662   41406 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:a5:2d", ip: ""} in network mk-kindnet-698465: {Iface:virbr2 ExpiryTime:2025-12-17 21:23:53 +0000 UTC Type:0 Mac:52:54:00:8d:a5:2d Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:kindnet-698465 Clientid:01:52:54:00:8d:a5:2d}
	I1217 20:23:55.787703   41406 main.go:143] libmachine: domain kindnet-698465 has defined IP address 192.168.50.49 and MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:55.787846   41406 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:a5:2d", ip: ""} in network mk-kindnet-698465: {Iface:virbr2 ExpiryTime:2025-12-17 21:23:53 +0000 UTC Type:0 Mac:52:54:00:8d:a5:2d Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:kindnet-698465 Clientid:01:52:54:00:8d:a5:2d}
	I1217 20:23:55.787880   41406 main.go:143] libmachine: domain kindnet-698465 has defined IP address 192.168.50.49 and MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:55.787964   41406 sshutil.go:53] new ssh client: &{IP:192.168.50.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/kindnet-698465/id_rsa Username:docker}
	I1217 20:23:55.788172   41406 sshutil.go:53] new ssh client: &{IP:192.168.50.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/kindnet-698465/id_rsa Username:docker}
	I1217 20:23:55.895316   41406 ssh_runner.go:195] Run: systemctl --version
	I1217 20:23:55.902310   41406 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 20:23:56.064345   41406 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 20:23:56.073259   41406 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 20:23:56.073354   41406 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 20:23:56.097194   41406 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 20:23:56.097216   41406 start.go:496] detecting cgroup driver to use...
	I1217 20:23:56.097275   41406 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 20:23:56.119046   41406 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 20:23:56.139583   41406 docker.go:218] disabling cri-docker service (if available) ...
	I1217 20:23:56.139659   41406 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 20:23:56.165683   41406 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 20:23:56.192910   41406 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 20:23:56.359925   41406 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 20:23:56.598130   41406 docker.go:234] disabling docker service ...
	I1217 20:23:56.598212   41406 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 20:23:56.623351   41406 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 20:23:56.644291   41406 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 20:23:56.830226   41406 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 20:23:56.991804   41406 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 20:23:57.010078   41406 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 20:23:57.033745   41406 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 20:23:57.033818   41406 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:23:57.049968   41406 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 20:23:57.050050   41406 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:23:57.063102   41406 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:23:57.076124   41406 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:23:57.090003   41406 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 20:23:57.106032   41406 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:23:57.121880   41406 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:23:57.149691   41406 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:23:57.163787   41406 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 20:23:57.176637   41406 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1217 20:23:57.176705   41406 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1217 20:23:57.207445   41406 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
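
The failed sysctl above is expected: /proc/sys/net/bridge/bridge-nf-call-iptables only exists once the br_netfilter module is loaded, so the provisioner tolerates the error, loads the module, and then enables IPv4 forwarding for kube-proxy and the CNI. A sketch of the same sequence run locally (minikube executes these commands over SSH):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // run executes a shell command and prints its combined output, returning any error.
    func run(cmd string) error {
    	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	fmt.Printf("$ %s\n%s", cmd, out)
    	return err
    }

    func main() {
    	// The sysctl key appears only after br_netfilter is loaded, so a failure here is tolerated.
    	if err := run("sudo sysctl net.bridge.bridge-nf-call-iptables"); err != nil {
    		_ = run("sudo modprobe br_netfilter")
    	}
    	// IPv4 forwarding must be on for pod-to-pod and service traffic to be routed.
    	_ = run(`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`)
    }
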
	I1217 20:23:57.222644   41406 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:23:57.375866   41406 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 20:23:57.522090   41406 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 20:23:57.522149   41406 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 20:23:57.527849   41406 start.go:564] Will wait 60s for crictl version
	I1217 20:23:57.527915   41406 ssh_runner.go:195] Run: which crictl
	I1217 20:23:57.532233   41406 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1217 20:23:57.569507   41406 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1217 20:23:57.569619   41406 ssh_runner.go:195] Run: crio --version
	I1217 20:23:57.598833   41406 ssh_runner.go:195] Run: crio --version
	I1217 20:23:57.633441   41406 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.29.1 ...
	I1217 20:23:57.636998   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:57.637415   41406 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:a5:2d", ip: ""} in network mk-kindnet-698465: {Iface:virbr2 ExpiryTime:2025-12-17 21:23:53 +0000 UTC Type:0 Mac:52:54:00:8d:a5:2d Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:kindnet-698465 Clientid:01:52:54:00:8d:a5:2d}
	I1217 20:23:57.637436   41406 main.go:143] libmachine: domain kindnet-698465 has defined IP address 192.168.50.49 and MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:57.637642   41406 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1217 20:23:57.642323   41406 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 20:23:57.658925   41406 kubeadm.go:884] updating cluster {Name:kindnet-698465 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:kindnet-698465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.50.49 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 20:23:57.659059   41406 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:23:57.659126   41406 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:23:57.692027   41406 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.3". assuming images are not preloaded.
	I1217 20:23:57.692100   41406 ssh_runner.go:195] Run: which lz4
	I1217 20:23:57.696581   41406 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1217 20:23:57.701516   41406 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1217 20:23:57.701562   41406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340314847 bytes)
	I1217 20:23:55.961508   39298 api_server.go:253] Checking apiserver healthz at https://192.168.83.130:8443/healthz ...
	I1217 20:23:55.962184   39298 api_server.go:269] stopped: https://192.168.83.130:8443/healthz: Get "https://192.168.83.130:8443/healthz": dial tcp 192.168.83.130:8443: connect: connection refused
	I1217 20:23:55.962239   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:23:55.962313   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:23:56.002682   39298 cri.go:89] found id: "83636ebff083364d3ed28b74e408a7dd6f4875ce6d80d9cbf6b580ee6c1180ad"
	I1217 20:23:56.002703   39298 cri.go:89] found id: ""
	I1217 20:23:56.002711   39298 logs.go:282] 1 containers: [83636ebff083364d3ed28b74e408a7dd6f4875ce6d80d9cbf6b580ee6c1180ad]
	I1217 20:23:56.002764   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:56.006995   39298 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:23:56.007063   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:23:56.046438   39298 cri.go:89] found id: "07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8"
	I1217 20:23:56.046464   39298 cri.go:89] found id: ""
	I1217 20:23:56.046475   39298 logs.go:282] 1 containers: [07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8]
	I1217 20:23:56.046536   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:56.051233   39298 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:23:56.051294   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:23:56.091451   39298 cri.go:89] found id: "132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1"
	I1217 20:23:56.091478   39298 cri.go:89] found id: "a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e"
	I1217 20:23:56.091485   39298 cri.go:89] found id: ""
	I1217 20:23:56.091495   39298 logs.go:282] 2 containers: [132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1 a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e]
	I1217 20:23:56.091582   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:56.096663   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:56.102451   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:23:56.102512   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:23:56.150921   39298 cri.go:89] found id: "316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e"
	I1217 20:23:56.150946   39298 cri.go:89] found id: "8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2"
	I1217 20:23:56.150952   39298 cri.go:89] found id: ""
	I1217 20:23:56.150962   39298 logs.go:282] 2 containers: [316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e 8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2]
	I1217 20:23:56.151016   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:56.155354   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:56.160882   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:23:56.160949   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:23:56.210950   39298 cri.go:89] found id: "5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde"
	I1217 20:23:56.210978   39298 cri.go:89] found id: ""
	I1217 20:23:56.210988   39298 logs.go:282] 1 containers: [5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde]
	I1217 20:23:56.211051   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:56.215843   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:23:56.215930   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:23:56.258049   39298 cri.go:89] found id: "fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda"
	I1217 20:23:56.258078   39298 cri.go:89] found id: ""
	I1217 20:23:56.258087   39298 logs.go:282] 1 containers: [fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda]
	I1217 20:23:56.258151   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:56.263428   39298 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:23:56.263518   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:23:56.307346   39298 cri.go:89] found id: ""
	I1217 20:23:56.307386   39298 logs.go:282] 0 containers: []
	W1217 20:23:56.307399   39298 logs.go:284] No container was found matching "kindnet"
	I1217 20:23:56.307406   39298 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 20:23:56.307474   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 20:23:56.347335   39298 cri.go:89] found id: "b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90"
	I1217 20:23:56.347382   39298 cri.go:89] found id: ""
	I1217 20:23:56.347392   39298 logs.go:282] 1 containers: [b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90]
	I1217 20:23:56.347458   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:56.353378   39298 logs.go:123] Gathering logs for coredns [132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1] ...
	I1217 20:23:56.353406   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1"
	I1217 20:23:56.396043   39298 logs.go:123] Gathering logs for kube-proxy [5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde] ...
	I1217 20:23:56.396077   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde"
	I1217 20:23:56.442409   39298 logs.go:123] Gathering logs for kube-controller-manager [fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda] ...
	I1217 20:23:56.442447   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda"
	I1217 20:23:56.485726   39298 logs.go:123] Gathering logs for storage-provisioner [b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90] ...
	I1217 20:23:56.485756   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90"
	I1217 20:23:56.527320   39298 logs.go:123] Gathering logs for kube-apiserver [83636ebff083364d3ed28b74e408a7dd6f4875ce6d80d9cbf6b580ee6c1180ad] ...
	I1217 20:23:56.527358   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83636ebff083364d3ed28b74e408a7dd6f4875ce6d80d9cbf6b580ee6c1180ad"
	I1217 20:23:56.573136   39298 logs.go:123] Gathering logs for coredns [a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e] ...
	I1217 20:23:56.573166   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e"
	I1217 20:23:56.613510   39298 logs.go:123] Gathering logs for kube-scheduler [316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e] ...
	I1217 20:23:56.613556   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e"
	I1217 20:23:56.701959   39298 logs.go:123] Gathering logs for kube-scheduler [8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2] ...
	I1217 20:23:56.702008   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2"
	I1217 20:23:56.744706   39298 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:23:56.744741   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:23:57.087777   39298 logs.go:123] Gathering logs for container status ...
	I1217 20:23:57.087816   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:23:57.140435   39298 logs.go:123] Gathering logs for kubelet ...
	I1217 20:23:57.140467   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:23:57.264360   39298 logs.go:123] Gathering logs for dmesg ...
	I1217 20:23:57.264400   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:23:57.282892   39298 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:23:57.282926   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:23:57.353746   39298 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:23:57.353776   39298 logs.go:123] Gathering logs for etcd [07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8] ...
	I1217 20:23:57.353790   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8"
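
Each "Gathering logs for …" step above first lists matching container IDs with crictl and then tails their logs, exactly as the Run lines show. A minimal sketch of that two-step pattern with os/exec (run locally here; minikube drives the same commands over SSH):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Step 1: list all containers (any state) whose name matches the component, IDs only.
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=etcd").Output()
    	if err != nil {
    		panic(err)
    	}
    	ids := strings.Fields(string(out))
    	// Step 2: tail the last 400 log lines of each matching container, as in the log above.
    	for _, id := range ids {
    		logs, _ := exec.Command("sudo", "/usr/bin/crictl", "logs", "--tail", "400", id).CombinedOutput()
    		fmt.Printf("=== %s ===\n%s\n", id, logs)
    	}
    }
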
	I1217 20:23:59.904077   39298 api_server.go:253] Checking apiserver healthz at https://192.168.83.130:8443/healthz ...
	I1217 20:23:59.904855   39298 api_server.go:269] stopped: https://192.168.83.130:8443/healthz: Get "https://192.168.83.130:8443/healthz": dial tcp 192.168.83.130:8443: connect: connection refused
	I1217 20:23:59.904919   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:23:59.904968   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:23:59.953562   39298 cri.go:89] found id: "83636ebff083364d3ed28b74e408a7dd6f4875ce6d80d9cbf6b580ee6c1180ad"
	I1217 20:23:59.953590   39298 cri.go:89] found id: ""
	I1217 20:23:59.953601   39298 logs.go:282] 1 containers: [83636ebff083364d3ed28b74e408a7dd6f4875ce6d80d9cbf6b580ee6c1180ad]
	I1217 20:23:59.953667   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:59.958796   39298 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:23:59.958856   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:24:00.017438   39298 cri.go:89] found id: "07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8"
	I1217 20:24:00.017460   39298 cri.go:89] found id: ""
	I1217 20:24:00.017467   39298 logs.go:282] 1 containers: [07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8]
	I1217 20:24:00.017519   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:24:00.022089   39298 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:24:00.022166   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:24:00.071894   39298 cri.go:89] found id: "132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1"
	I1217 20:24:00.071924   39298 cri.go:89] found id: "a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e"
	I1217 20:24:00.071931   39298 cri.go:89] found id: ""
	I1217 20:24:00.071941   39298 logs.go:282] 2 containers: [132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1 a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e]
	I1217 20:24:00.072011   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:24:00.079410   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:24:00.086456   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:24:00.086545   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:24:00.131603   39298 cri.go:89] found id: "316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e"
	I1217 20:24:00.131630   39298 cri.go:89] found id: "8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2"
	I1217 20:24:00.131637   39298 cri.go:89] found id: ""
	I1217 20:24:00.131645   39298 logs.go:282] 2 containers: [316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e 8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2]
	I1217 20:24:00.131710   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:24:00.137862   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:24:00.142438   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:24:00.142509   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:24:00.187259   39298 cri.go:89] found id: "5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde"
	I1217 20:24:00.187287   39298 cri.go:89] found id: ""
	I1217 20:24:00.187298   39298 logs.go:282] 1 containers: [5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde]
	I1217 20:24:00.187364   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:24:00.193451   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:24:00.193547   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:24:00.247169   39298 cri.go:89] found id: "fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda"
	I1217 20:24:00.247199   39298 cri.go:89] found id: ""
	I1217 20:24:00.247209   39298 logs.go:282] 1 containers: [fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda]
	I1217 20:24:00.247270   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:24:00.253230   39298 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:24:00.253321   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:24:00.294038   39298 cri.go:89] found id: ""
	I1217 20:24:00.294065   39298 logs.go:282] 0 containers: []
	W1217 20:24:00.294072   39298 logs.go:284] No container was found matching "kindnet"
	I1217 20:24:00.294079   39298 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 20:24:00.294129   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 20:24:00.336759   39298 cri.go:89] found id: "b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90"
	I1217 20:24:00.336789   39298 cri.go:89] found id: ""
	I1217 20:24:00.336800   39298 logs.go:282] 1 containers: [b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90]
	I1217 20:24:00.336882   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:24:00.343312   39298 logs.go:123] Gathering logs for kube-scheduler [316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e] ...
	I1217 20:24:00.343375   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e"
	I1217 20:24:00.465960   39298 logs.go:123] Gathering logs for container status ...
	I1217 20:24:00.466008   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:24:00.520650   39298 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:24:00.520678   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:24:00.597412   39298 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:24:00.597440   39298 logs.go:123] Gathering logs for coredns [132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1] ...
	I1217 20:24:00.597457   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1"
	I1217 20:24:00.647034   39298 logs.go:123] Gathering logs for coredns [a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e] ...
	I1217 20:24:00.647067   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e"
	I1217 20:24:00.693994   39298 logs.go:123] Gathering logs for kube-scheduler [8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2] ...
	I1217 20:24:00.694028   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2"
	I1217 20:24:00.756516   39298 logs.go:123] Gathering logs for kube-proxy [5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde] ...
	I1217 20:24:00.756568   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde"
	I1217 20:24:00.838244   39298 logs.go:123] Gathering logs for kube-controller-manager [fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda] ...
	I1217 20:24:00.838277   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda"
	W1217 20:23:57.418926   41240 pod_ready.go:104] pod "coredns-66bc5c9577-7grrd" is not "Ready", error: <nil>
	I1217 20:23:57.918031   41240 pod_ready.go:94] pod "coredns-66bc5c9577-7grrd" is "Ready"
	I1217 20:23:57.918067   41240 pod_ready.go:86] duration metric: took 2.509345108s for pod "coredns-66bc5c9577-7grrd" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:23:57.921010   41240 pod_ready.go:83] waiting for pod "etcd-pause-722044" in "kube-system" namespace to be "Ready" or be gone ...
	W1217 20:23:59.930426   41240 pod_ready.go:104] pod "etcd-pause-722044" is not "Ready", error: <nil>
	I1217 20:23:59.075028   41406 crio.go:462] duration metric: took 1.378503564s to copy over tarball
	I1217 20:23:59.075105   41406 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1217 20:24:00.801365   41406 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.72622919s)
	I1217 20:24:00.801397   41406 crio.go:469] duration metric: took 1.726339596s to extract the tarball
	I1217 20:24:00.801405   41406 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1217 20:24:00.851178   41406 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:24:00.894876   41406 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:24:00.894914   41406 cache_images.go:86] Images are preloaded, skipping loading
	I1217 20:24:00.894925   41406 kubeadm.go:935] updating node { 192.168.50.49 8443 v1.34.3 crio true true} ...
	I1217 20:24:00.895043   41406 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-698465 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.49
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:kindnet-698465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I1217 20:24:00.895135   41406 ssh_runner.go:195] Run: crio config
	I1217 20:24:00.953408   41406 cni.go:84] Creating CNI manager for "kindnet"
	I1217 20:24:00.953447   41406 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 20:24:00.953475   41406 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.49 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-698465 NodeName:kindnet-698465 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.49"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.49 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 20:24:00.953660   41406 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.49
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-698465"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.49"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.49"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 20:24:00.953739   41406 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 20:24:00.971394   41406 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 20:24:00.971471   41406 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 20:24:00.989398   41406 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1217 20:24:01.018287   41406 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 20:24:01.044675   41406 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
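The kubeadm/kubelet/kube-proxy YAML shown above is rendered from the option values printed at kubeadm.go:190 and then copied to /var/tmp/minikube/kubeadm.yaml.new. As a rough illustration of that rendering step only (the struct and template here are hypothetical, not minikube's actual types or templates), a small text/template sketch producing the ClusterConfiguration fragment from the same values:

package main

import (
	"os"
	"text/template"
)

// opts mirrors a few fields from the kubeadm options dump above; the struct
// itself is invented for this sketch.
type opts struct {
	ClusterName          string
	ControlPlaneEndpoint string
	KubernetesVersion    string
	PodSubnet            string
	ServiceCIDR          string
	DNSDomain            string
}

const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
controlPlaneEndpoint: {{.ControlPlaneEndpoint}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	t := template.Must(template.New("cc").Parse(clusterCfg))
	// Values taken from the generated config shown above.
	if err := t.Execute(os.Stdout, opts{
		ClusterName:          "mk",
		ControlPlaneEndpoint: "control-plane.minikube.internal:8443",
		KubernetesVersion:    "v1.34.3",
		PodSubnet:            "10.244.0.0/16",
		ServiceCIDR:          "10.96.0.0/12",
		DNSDomain:            "cluster.local",
	}); err != nil {
		panic(err)
	}
}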
	I1217 20:24:01.067077   41406 ssh_runner.go:195] Run: grep 192.168.50.49	control-plane.minikube.internal$ /etc/hosts
	I1217 20:24:01.071796   41406 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.49	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
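The bash one-liner above drops any existing control-plane.minikube.internal line from /etc/hosts and appends a fresh "IP<TAB>host" mapping via a temp file. A local Go equivalent of that same logic (a hypothetical helper, run directly instead of through sudo over ssh):

package main

import (
	"log"
	"os"
	"strings"
)

// ensureHostsEntry removes any line already ending in "\t<host>" and appends
// "ip\thost", mirroring the grep -v / echo / cp pipeline above.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // stale entry for the same host name
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	// Build the new file beside the old one and rename, the same
	// "write /tmp/h.$$ then copy" shape as the shell version.
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.50.49", "control-plane.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}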
	I1217 20:24:01.086400   41406 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:24:01.231760   41406 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:24:01.272729   41406 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465 for IP: 192.168.50.49
	I1217 20:24:01.272764   41406 certs.go:195] generating shared ca certs ...
	I1217 20:24:01.272781   41406 certs.go:227] acquiring lock for ca certs: {Name:mka9d751f3e3cbcb654d1f1d24f2b10b27bc58a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:24:01.272948   41406 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-3611/.minikube/ca.key
	I1217 20:24:01.273001   41406 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-3611/.minikube/proxy-client-ca.key
	I1217 20:24:01.273015   41406 certs.go:257] generating profile certs ...
	I1217 20:24:01.273081   41406 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/client.key
	I1217 20:24:01.273113   41406 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/client.crt with IP's: []
	I1217 20:24:01.382323   41406 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/client.crt ...
	I1217 20:24:01.382354   41406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/client.crt: {Name:mk40e4b55da943b02e2b580c004ca615e5767ee4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:24:01.382520   41406 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/client.key ...
	I1217 20:24:01.382543   41406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/client.key: {Name:mk017177724a03f6f4e4fa3a06dd7000325479c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:24:01.382634   41406 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/apiserver.key.06ebfaef
	I1217 20:24:01.382649   41406 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/apiserver.crt.06ebfaef with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.49]
	I1217 20:24:01.449287   41406 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/apiserver.crt.06ebfaef ...
	I1217 20:24:01.449313   41406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/apiserver.crt.06ebfaef: {Name:mk47a3c15ce779e642f993485cba2f2f1b770ad7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:24:01.449522   41406 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/apiserver.key.06ebfaef ...
	I1217 20:24:01.449570   41406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/apiserver.key.06ebfaef: {Name:mk804d17be1e550af07ee0c34197db572f23c394 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:24:01.449713   41406 certs.go:382] copying /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/apiserver.crt.06ebfaef -> /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/apiserver.crt
	I1217 20:24:01.449845   41406 certs.go:386] copying /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/apiserver.key.06ebfaef -> /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/apiserver.key
	I1217 20:24:01.450015   41406 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/proxy-client.key
	I1217 20:24:01.450045   41406 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/proxy-client.crt with IP's: []
	I1217 20:24:01.479857   41406 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/proxy-client.crt ...
	I1217 20:24:01.479890   41406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/proxy-client.crt: {Name:mka17d60ef037f9ca717fce55913794601abebf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:24:01.480076   41406 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/proxy-client.key ...
	I1217 20:24:01.480092   41406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/proxy-client.key: {Name:mk522bc70fda4b101cdce9cf05149327853db3ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
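The profile certificates being written above are ordinary x509 key pairs whose SAN list carries the IPs shown in the crypto.go:68 line (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.50.49). Below is a stripped-down sketch of that step using only the standard library, not minikube's crypto package, and self-signing instead of signing with the minikubeCA key:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// Same IP SAN list as the apiserver cert generated in the log above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.50.49"),
		},
	}
	// Self-signed here; minikube signs with its shared CA instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	certOut, err := os.Create("apiserver.crt")
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	certOut.Close()

	keyOut, err := os.Create("apiserver.key")
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	keyOut.Close()
}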
	I1217 20:24:01.480306   41406 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-3611/.minikube/certs/7531.pem (1338 bytes)
	W1217 20:24:01.480356   41406 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-3611/.minikube/certs/7531_empty.pem, impossibly tiny 0 bytes
	I1217 20:24:01.480366   41406 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:24:01.480393   41406 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:24:01.480415   41406 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-3611/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:24:01.480450   41406 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-3611/.minikube/certs/key.pem (1679 bytes)
	I1217 20:24:01.480490   41406 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-3611/.minikube/files/etc/ssl/certs/75312.pem (1708 bytes)
	I1217 20:24:01.481042   41406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 20:24:01.520181   41406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 20:24:01.560909   41406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 20:24:01.596064   41406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 20:24:01.629401   41406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1217 20:24:01.660208   41406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 20:24:01.695802   41406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 20:24:01.729764   41406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 20:24:01.765740   41406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/certs/7531.pem --> /usr/share/ca-certificates/7531.pem (1338 bytes)
	I1217 20:24:01.798192   41406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/files/etc/ssl/certs/75312.pem --> /usr/share/ca-certificates/75312.pem (1708 bytes)
	I1217 20:24:01.828349   41406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:24:01.859509   41406 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 20:24:01.881654   41406 ssh_runner.go:195] Run: openssl version
	I1217 20:24:01.888323   41406 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:24:01.901980   41406 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:24:01.915302   41406 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:24:01.921126   41406 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 19:21 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:24:01.921181   41406 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:24:01.932239   41406 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:24:01.946269   41406 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 20:24:01.962070   41406 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7531.pem
	I1217 20:24:01.979232   41406 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7531.pem /etc/ssl/certs/7531.pem
	I1217 20:24:01.994761   41406 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7531.pem
	I1217 20:24:02.004561   41406 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 19:32 /usr/share/ca-certificates/7531.pem
	I1217 20:24:02.004637   41406 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7531.pem
	I1217 20:24:02.017106   41406 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:24:02.036258   41406 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/7531.pem /etc/ssl/certs/51391683.0
	I1217 20:24:02.050135   41406 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/75312.pem
	I1217 20:24:02.063199   41406 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/75312.pem /etc/ssl/certs/75312.pem
	I1217 20:24:02.075825   41406 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75312.pem
	I1217 20:24:02.081883   41406 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 19:32 /usr/share/ca-certificates/75312.pem
	I1217 20:24:02.081959   41406 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75312.pem
	I1217 20:24:02.089916   41406 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:24:02.104354   41406 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/75312.pem /etc/ssl/certs/3ec20f2e.0
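The openssl x509 -hash / ln -fs pairs above are the usual way of making a CA visible to OpenSSL-based clients: each certificate gets a symlink in /etc/ssl/certs named after its subject hash (b5213941.0, 51391683.0, 3ec20f2e.0 here). A hypothetical local helper doing the same two steps, slightly simplified in that it links straight to the source file:

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCASubjectHash computes the OpenSSL subject hash of certPath and points
// /etc/ssl/certs/<hash>.0 at it, like the ssh_runner steps above.
func linkCASubjectHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // the -f behaviour of ln -fs
	return os.Symlink(certPath, link)
}

func main() {
	for _, c := range []string{
		"/usr/share/ca-certificates/minikubeCA.pem",
		"/usr/share/ca-certificates/7531.pem",
		"/usr/share/ca-certificates/75312.pem",
	} {
		if err := linkCASubjectHash(c); err != nil {
			log.Fatal(err)
		}
	}
}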
	I1217 20:24:02.116469   41406 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 20:24:02.121460   41406 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 20:24:02.121523   41406 kubeadm.go:401] StartCluster: {Name:kindnet-698465 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3
ClusterName:kindnet-698465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.50.49 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:24:02.121623   41406 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 20:24:02.121701   41406 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:24:02.158952   41406 cri.go:89] found id: ""
	I1217 20:24:02.159024   41406 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 20:24:02.174479   41406 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 20:24:02.187264   41406 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 20:24:02.200010   41406 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 20:24:02.200026   41406 kubeadm.go:158] found existing configuration files:
	
	I1217 20:24:02.200082   41406 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 20:24:02.211352   41406 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 20:24:02.211410   41406 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 20:24:02.224324   41406 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 20:24:02.236206   41406 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 20:24:02.236264   41406 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 20:24:02.250140   41406 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 20:24:02.263210   41406 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 20:24:02.263295   41406 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 20:24:02.275894   41406 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 20:24:02.287687   41406 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 20:24:02.287758   41406 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
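The block above is minikube's stale-config check: for each kubeconfig under /etc/kubernetes it greps for the expected https://control-plane.minikube.internal:8443 endpoint and removes the file when the endpoint (or the file itself) is missing, so kubeadm can regenerate it. The same loop, written directly in Go purely for illustration (no sudo/ssh):

package main

import (
	"bytes"
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	for _, name := range []string{
		"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf",
	} {
		path := filepath.Join("/etc/kubernetes", name)
		data, err := os.ReadFile(path)
		if err == nil && bytes.Contains(data, []byte(endpoint)) {
			continue // config already points at the expected endpoint
		}
		// Missing file or wrong endpoint: remove it so kubeadm regenerates it.
		if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
			fmt.Fprintln(os.Stderr, "remove:", err)
		}
	}
}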
	I1217 20:24:02.300737   41406 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1217 20:24:02.354484   41406 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1217 20:24:02.354594   41406 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 20:24:02.470097   41406 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 20:24:02.470211   41406 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 20:24:02.470363   41406 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 20:24:02.484010   41406 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
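kubeadm init is launched above with the minikube-managed binaries prepended to PATH and a fixed --ignore-preflight-errors list. A sketch of an equivalent direct invocation (run as root; argument values copied from the log line, with the ignore list abbreviated):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("/var/lib/minikube/binaries/v1.34.3/kubeadm",
		"init",
		"--config", "/var/tmp/minikube/kubeadm.yaml",
		// Abbreviated from the full ignore list in the log line above.
		"--ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem",
	)
	// Same effect as the env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" prefix above.
	cmd.Env = append(os.Environ(), "PATH=/var/lib/minikube/binaries/v1.34.3:"+os.Getenv("PATH"))
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("kubeadm init: %v", err)
	}
}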
	I1217 20:24:00.892369   39298 logs.go:123] Gathering logs for storage-provisioner [b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90] ...
	I1217 20:24:00.892401   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90"
	I1217 20:24:00.956742   39298 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:24:00.956770   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:24:01.337094   39298 logs.go:123] Gathering logs for kubelet ...
	I1217 20:24:01.337127   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:24:01.446908   39298 logs.go:123] Gathering logs for dmesg ...
	I1217 20:24:01.446958   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:24:01.463439   39298 logs.go:123] Gathering logs for kube-apiserver [83636ebff083364d3ed28b74e408a7dd6f4875ce6d80d9cbf6b580ee6c1180ad] ...
	I1217 20:24:01.463470   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83636ebff083364d3ed28b74e408a7dd6f4875ce6d80d9cbf6b580ee6c1180ad"
	I1217 20:24:01.510072   39298 logs.go:123] Gathering logs for etcd [07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8] ...
	I1217 20:24:01.510104   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8"
	I1217 20:24:04.073445   39298 api_server.go:253] Checking apiserver healthz at https://192.168.83.130:8443/healthz ...
	W1217 20:24:02.427816   41240 pod_ready.go:104] pod "etcd-pause-722044" is not "Ready", error: <nil>
	W1217 20:24:04.428092   41240 pod_ready.go:104] pod "etcd-pause-722044" is not "Ready", error: <nil>
	I1217 20:24:06.426328   41240 pod_ready.go:94] pod "etcd-pause-722044" is "Ready"
	I1217 20:24:06.426363   41240 pod_ready.go:86] duration metric: took 8.505323532s for pod "etcd-pause-722044" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:24:06.429134   41240 pod_ready.go:83] waiting for pod "kube-apiserver-pause-722044" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:24:06.433673   41240 pod_ready.go:94] pod "kube-apiserver-pause-722044" is "Ready"
	I1217 20:24:06.433701   41240 pod_ready.go:86] duration metric: took 4.547925ms for pod "kube-apiserver-pause-722044" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:24:06.435771   41240 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-722044" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:24:06.440685   41240 pod_ready.go:94] pod "kube-controller-manager-pause-722044" is "Ready"
	I1217 20:24:06.440712   41240 pod_ready.go:86] duration metric: took 4.916476ms for pod "kube-controller-manager-pause-722044" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:24:06.443472   41240 pod_ready.go:83] waiting for pod "kube-proxy-snthq" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:24:06.625267   41240 pod_ready.go:94] pod "kube-proxy-snthq" is "Ready"
	I1217 20:24:06.625293   41240 pod_ready.go:86] duration metric: took 181.802269ms for pod "kube-proxy-snthq" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:24:06.825713   41240 pod_ready.go:83] waiting for pod "kube-scheduler-pause-722044" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:24:07.226178   41240 pod_ready.go:94] pod "kube-scheduler-pause-722044" is "Ready"
	I1217 20:24:07.226203   41240 pod_ready.go:86] duration metric: took 400.45979ms for pod "kube-scheduler-pause-722044" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:24:07.226213   41240 pod_ready.go:40] duration metric: took 11.823328299s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 20:24:07.279962   41240 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1217 20:24:07.281793   41240 out.go:179] * Done! kubectl is now configured to use "pause-722044" cluster and "default" namespace by default
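The pod_ready lines from process 41240 above poll each kube-system pod until its Ready condition turns true or the pod disappears. A condensed client-go version of that wait, assuming a kubeconfig at the default path; the polling intervals and the shape of the loop are illustrative, not minikube's pod_ready.go:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Poll until the pod is Ready or gone, like the waits logged above.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-pause-722044", metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return true, nil // "or be gone"
			}
			if err != nil {
				return false, nil // transient API error: keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(`pod "etcd-pause-722044" is "Ready"`)
}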
	I1217 20:24:02.757947   41406 out.go:252]   - Generating certificates and keys ...
	I1217 20:24:02.758105   41406 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 20:24:02.758179   41406 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 20:24:02.758236   41406 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 20:24:03.392356   41406 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 20:24:03.865938   41406 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 20:24:05.002288   41406 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 20:24:05.303955   41406 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 20:24:05.304103   41406 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kindnet-698465 localhost] and IPs [192.168.50.49 127.0.0.1 ::1]
	I1217 20:24:05.727414   41406 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 20:24:05.727593   41406 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kindnet-698465 localhost] and IPs [192.168.50.49 127.0.0.1 ::1]
	I1217 20:24:05.986325   41406 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 20:24:06.032431   41406 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 20:24:06.268771   41406 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 20:24:06.269101   41406 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 20:24:06.376648   41406 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 20:24:06.781743   41406 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 20:24:06.908193   41406 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 20:24:07.155730   41406 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 20:24:07.280125   41406 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 20:24:07.280884   41406 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 20:24:07.283968   41406 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	
	
	==> CRI-O <==
	Dec 17 20:24:07 pause-722044 crio[2801]: time="2025-12-17 20:24:07.943511135Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766003047943488319,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:128011,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2c55270a-ced7-4076-8ced-fa4186fe669a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 20:24:07 pause-722044 crio[2801]: time="2025-12-17 20:24:07.944755434Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8f18a067-bcbe-456c-b9f2-91545d822aac name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 20:24:07 pause-722044 crio[2801]: time="2025-12-17 20:24:07.944890001Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8f18a067-bcbe-456c-b9f2-91545d822aac name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 20:24:07 pause-722044 crio[2801]: time="2025-12-17 20:24:07.945142224Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:569ce5bc0074ae2269067a48b2c7bdcb79c7ec4420e657f9b3d29469cf75803e,PodSandboxId:a8aeaab42259b6fb3f701143a7ae8d45a4d3adf27ce74e8494720310469c642b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766003033780682122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7grrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7659c433-1b61-45dd-a6ee-14007a1efcda,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c04f950f167c76f20d1fcacf0e4feea0a624a6b0f25513076e6d8ee564a481bd,PodSandboxId:b11ff73723458d52c2f72fd01fd3275b4049e56a7762280f6ac2dea187dd801e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766003033796784692,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-snthq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24049acb-98c2-425b-b662-917a0f36e924,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff6cf0abbf3f6fda28f66d7fe150b08ac01b68f812fe75463e81ccdddfd15dc6,PodSandboxId:cb06495b9771e523f9c986a0a0c266640e41da22723c8ca23ddd0059786583ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766003028951678240,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83d8b514e6b67d9e22e45fb706e56aae,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7faab952af369c929f10b4a2e7f271a9e14e5d78e2069f149ca70ff8b9e4b904,PodSandboxId:f209a2779d795938af8709dd341c6402f8d38f6ff06dcc6947169c801e3945ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b
0c,State:CONTAINER_RUNNING,CreatedAt:1766003028960946916,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a6a12fb84ad9e69321c2dfd275a193c,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2c6ecbb072a467a63706401a7d5c001a661e32653ead95d8e11bf023908b7e0,PodSandboxId:97505412ecef1fbc01c623b497b45b3df1bbb0cc01a790811cffe6a8baebf71e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766003028958737572,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb726f0e9635b0cbfd43d7e7b7eb9dbe,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77f4487b3cdf857c8a5674baf1be7a012598267474c78df167bb34a781f1469c,PodSandboxId:d16b6434c38711eb995045ab894c68c5184fb0c3e6ac82dcc155e49b7b245df0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Imag
e:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766003028929046639,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b21d1fafe57d24c490b66f74eb1a9a96,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:288a092a801206a702bf6578ae6b07e6044aead431acb0edbc60cf72d5ca3b17,PodSandboxId:b11ff73723458d52c2f72fd01fd3275b4049e56a776228
0f6ac2dea187dd801e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1766003003475507590,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-snthq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24049acb-98c2-425b-b662-917a0f36e924,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3148e9a334330e99df200e53da3dd65a4273e1bd7bfe071c3d5a2ab5babc79cd,PodSandboxId:a8aeaab42259b6fb3f701143a7ae8d45a4d3adf27ce74e8494720310469c642b,Metadata:&Conta
inerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1766003004534112897,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7grrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7659c433-1b61-45dd-a6ee-14007a1efcda,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00a2e25f105f3f6ead22ad8dc039761969eb90ee1fbadaddcc49dfff31ea179b,PodSandboxId:f209a2779d795938af8709dd341c6402f8d38f6ff06dcc6947169c801e3945ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_EXITED,CreatedAt:1766003003541941060,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a6a12fb84ad9e69321c2dfd275a193c,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dbeb3a9804d156cf89fd691a0bb1fbcc3a6a27f4878706712834561cd962506,PodSandboxId:d16b6434c38711eb995045ab894c68c5184fb0c3e6ac82dcc155e49b7b245df0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1766003003409790971,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b21d1fafe57d24c490b66f74eb1a9a96,},Annotations:map[string]string{io.kubernetes.container.hash:
5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96157b431de6bce3a1d7a6419fab455461481cafe1319056a3a0efa2e1c076cb,PodSandboxId:97505412ecef1fbc01c623b497b45b3df1bbb0cc01a790811cffe6a8baebf71e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1766003003376876821,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-722044,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: eb726f0e9635b0cbfd43d7e7b7eb9dbe,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb92e68f0672b340c271cd39f375352462fbcd4af3068254b0220df4d2490ea9,PodSandboxId:cb06495b9771e523f9c986a0a0c266640e41da22723c8ca23ddd0059786583ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1766003003277513316,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83d8b514e6b67d9e22e45fb706e56aae,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8f18a067-bcbe-456c-b9f2-91545d822aac name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 20:24:07 pause-722044 crio[2801]: time="2025-12-17 20:24:07.984334542Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9c21d4d7-2e7e-408e-aee1-9ef9146f02ff name=/runtime.v1.RuntimeService/Version
	Dec 17 20:24:07 pause-722044 crio[2801]: time="2025-12-17 20:24:07.984563907Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9c21d4d7-2e7e-408e-aee1-9ef9146f02ff name=/runtime.v1.RuntimeService/Version
	Dec 17 20:24:07 pause-722044 crio[2801]: time="2025-12-17 20:24:07.986327339Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4d35aaa7-3a8d-4cfa-bfe6-6f0d05ec1e1b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 20:24:07 pause-722044 crio[2801]: time="2025-12-17 20:24:07.986669971Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766003047986651298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:128011,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4d35aaa7-3a8d-4cfa-bfe6-6f0d05ec1e1b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 20:24:07 pause-722044 crio[2801]: time="2025-12-17 20:24:07.988150822Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ea332852-76f9-48ce-a82e-f59edbb51f01 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 20:24:07 pause-722044 crio[2801]: time="2025-12-17 20:24:07.988375938Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ea332852-76f9-48ce-a82e-f59edbb51f01 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 20:24:07 pause-722044 crio[2801]: time="2025-12-17 20:24:07.988685238Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:569ce5bc0074ae2269067a48b2c7bdcb79c7ec4420e657f9b3d29469cf75803e,PodSandboxId:a8aeaab42259b6fb3f701143a7ae8d45a4d3adf27ce74e8494720310469c642b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766003033780682122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7grrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7659c433-1b61-45dd-a6ee-14007a1efcda,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c04f950f167c76f20d1fcacf0e4feea0a624a6b0f25513076e6d8ee564a481bd,PodSandboxId:b11ff73723458d52c2f72fd01fd3275b4049e56a7762280f6ac2dea187dd801e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766003033796784692,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-snthq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24049acb-98c2-425b-b662-917a0f36e924,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff6cf0abbf3f6fda28f66d7fe150b08ac01b68f812fe75463e81ccdddfd15dc6,PodSandboxId:cb06495b9771e523f9c986a0a0c266640e41da22723c8ca23ddd0059786583ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766003028951678240,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83d8b514e6b67d9e22e45fb706e56aae,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7faab952af369c929f10b4a2e7f271a9e14e5d78e2069f149ca70ff8b9e4b904,PodSandboxId:f209a2779d795938af8709dd341c6402f8d38f6ff06dcc6947169c801e3945ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b
0c,State:CONTAINER_RUNNING,CreatedAt:1766003028960946916,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a6a12fb84ad9e69321c2dfd275a193c,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2c6ecbb072a467a63706401a7d5c001a661e32653ead95d8e11bf023908b7e0,PodSandboxId:97505412ecef1fbc01c623b497b45b3df1bbb0cc01a790811cffe6a8baebf71e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766003028958737572,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb726f0e9635b0cbfd43d7e7b7eb9dbe,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77f4487b3cdf857c8a5674baf1be7a012598267474c78df167bb34a781f1469c,PodSandboxId:d16b6434c38711eb995045ab894c68c5184fb0c3e6ac82dcc155e49b7b245df0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Imag
e:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766003028929046639,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b21d1fafe57d24c490b66f74eb1a9a96,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:288a092a801206a702bf6578ae6b07e6044aead431acb0edbc60cf72d5ca3b17,PodSandboxId:b11ff73723458d52c2f72fd01fd3275b4049e56a776228
0f6ac2dea187dd801e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1766003003475507590,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-snthq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24049acb-98c2-425b-b662-917a0f36e924,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3148e9a334330e99df200e53da3dd65a4273e1bd7bfe071c3d5a2ab5babc79cd,PodSandboxId:a8aeaab42259b6fb3f701143a7ae8d45a4d3adf27ce74e8494720310469c642b,Metadata:&Conta
inerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1766003004534112897,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7grrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7659c433-1b61-45dd-a6ee-14007a1efcda,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00a2e25f105f3f6ead22ad8dc039761969eb90ee1fbadaddcc49dfff31ea179b,PodSandboxId:f209a2779d795938af8709dd341c6402f8d38f6ff06dcc6947169c801e3945ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_EXITED,CreatedAt:1766003003541941060,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a6a12fb84ad9e69321c2dfd275a193c,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dbeb3a9804d156cf89fd691a0bb1fbcc3a6a27f4878706712834561cd962506,PodSandboxId:d16b6434c38711eb995045ab894c68c5184fb0c3e6ac82dcc155e49b7b245df0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1766003003409790971,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b21d1fafe57d24c490b66f74eb1a9a96,},Annotations:map[string]string{io.kubernetes.container.hash:
5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96157b431de6bce3a1d7a6419fab455461481cafe1319056a3a0efa2e1c076cb,PodSandboxId:97505412ecef1fbc01c623b497b45b3df1bbb0cc01a790811cffe6a8baebf71e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1766003003376876821,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-722044,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: eb726f0e9635b0cbfd43d7e7b7eb9dbe,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb92e68f0672b340c271cd39f375352462fbcd4af3068254b0220df4d2490ea9,PodSandboxId:cb06495b9771e523f9c986a0a0c266640e41da22723c8ca23ddd0059786583ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1766003003277513316,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83d8b514e6b67d9e22e45fb706e56aae,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ea332852-76f9-48ce-a82e-f59edbb51f01 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 20:24:08 pause-722044 crio[2801]: time="2025-12-17 20:24:08.035298917Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=47638a5b-4c3b-4d2b-8fd8-57000d4ee09c name=/runtime.v1.RuntimeService/Version
	Dec 17 20:24:08 pause-722044 crio[2801]: time="2025-12-17 20:24:08.035411676Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=47638a5b-4c3b-4d2b-8fd8-57000d4ee09c name=/runtime.v1.RuntimeService/Version
	Dec 17 20:24:08 pause-722044 crio[2801]: time="2025-12-17 20:24:08.036727941Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0f15b10a-27fe-4706-a4e6-3bc155a1f828 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 20:24:08 pause-722044 crio[2801]: time="2025-12-17 20:24:08.037378868Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766003048037352217,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:128011,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0f15b10a-27fe-4706-a4e6-3bc155a1f828 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 20:24:08 pause-722044 crio[2801]: time="2025-12-17 20:24:08.038440575Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c859749b-c214-4bb1-8109-c900e7670c84 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 20:24:08 pause-722044 crio[2801]: time="2025-12-17 20:24:08.038494479Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c859749b-c214-4bb1-8109-c900e7670c84 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 20:24:08 pause-722044 crio[2801]: time="2025-12-17 20:24:08.038717293Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:569ce5bc0074ae2269067a48b2c7bdcb79c7ec4420e657f9b3d29469cf75803e,PodSandboxId:a8aeaab42259b6fb3f701143a7ae8d45a4d3adf27ce74e8494720310469c642b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766003033780682122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7grrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7659c433-1b61-45dd-a6ee-14007a1efcda,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c04f950f167c76f20d1fcacf0e4feea0a624a6b0f25513076e6d8ee564a481bd,PodSandboxId:b11ff73723458d52c2f72fd01fd3275b4049e56a7762280f6ac2dea187dd801e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766003033796784692,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-snthq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24049acb-98c2-425b-b662-917a0f36e924,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff6cf0abbf3f6fda28f66d7fe150b08ac01b68f812fe75463e81ccdddfd15dc6,PodSandboxId:cb06495b9771e523f9c986a0a0c266640e41da22723c8ca23ddd0059786583ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766003028951678240,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83d8b514e6b67d9e22e45fb706e56aae,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7faab952af369c929f10b4a2e7f271a9e14e5d78e2069f149ca70ff8b9e4b904,PodSandboxId:f209a2779d795938af8709dd341c6402f8d38f6ff06dcc6947169c801e3945ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b
0c,State:CONTAINER_RUNNING,CreatedAt:1766003028960946916,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a6a12fb84ad9e69321c2dfd275a193c,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2c6ecbb072a467a63706401a7d5c001a661e32653ead95d8e11bf023908b7e0,PodSandboxId:97505412ecef1fbc01c623b497b45b3df1bbb0cc01a790811cffe6a8baebf71e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766003028958737572,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb726f0e9635b0cbfd43d7e7b7eb9dbe,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77f4487b3cdf857c8a5674baf1be7a012598267474c78df167bb34a781f1469c,PodSandboxId:d16b6434c38711eb995045ab894c68c5184fb0c3e6ac82dcc155e49b7b245df0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Imag
e:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766003028929046639,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b21d1fafe57d24c490b66f74eb1a9a96,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:288a092a801206a702bf6578ae6b07e6044aead431acb0edbc60cf72d5ca3b17,PodSandboxId:b11ff73723458d52c2f72fd01fd3275b4049e56a776228
0f6ac2dea187dd801e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1766003003475507590,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-snthq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24049acb-98c2-425b-b662-917a0f36e924,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3148e9a334330e99df200e53da3dd65a4273e1bd7bfe071c3d5a2ab5babc79cd,PodSandboxId:a8aeaab42259b6fb3f701143a7ae8d45a4d3adf27ce74e8494720310469c642b,Metadata:&Conta
inerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1766003004534112897,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7grrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7659c433-1b61-45dd-a6ee-14007a1efcda,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00a2e25f105f3f6ead22ad8dc039761969eb90ee1fbadaddcc49dfff31ea179b,PodSandboxId:f209a2779d795938af8709dd341c6402f8d38f6ff06dcc6947169c801e3945ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_EXITED,CreatedAt:1766003003541941060,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a6a12fb84ad9e69321c2dfd275a193c,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dbeb3a9804d156cf89fd691a0bb1fbcc3a6a27f4878706712834561cd962506,PodSandboxId:d16b6434c38711eb995045ab894c68c5184fb0c3e6ac82dcc155e49b7b245df0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1766003003409790971,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b21d1fafe57d24c490b66f74eb1a9a96,},Annotations:map[string]string{io.kubernetes.container.hash:
5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96157b431de6bce3a1d7a6419fab455461481cafe1319056a3a0efa2e1c076cb,PodSandboxId:97505412ecef1fbc01c623b497b45b3df1bbb0cc01a790811cffe6a8baebf71e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1766003003376876821,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-722044,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: eb726f0e9635b0cbfd43d7e7b7eb9dbe,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb92e68f0672b340c271cd39f375352462fbcd4af3068254b0220df4d2490ea9,PodSandboxId:cb06495b9771e523f9c986a0a0c266640e41da22723c8ca23ddd0059786583ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1766003003277513316,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83d8b514e6b67d9e22e45fb706e56aae,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c859749b-c214-4bb1-8109-c900e7670c84 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 20:24:08 pause-722044 crio[2801]: time="2025-12-17 20:24:08.079728176Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ca246274-20fe-4f8b-9501-7ce14b50d5a4 name=/runtime.v1.RuntimeService/Version
	Dec 17 20:24:08 pause-722044 crio[2801]: time="2025-12-17 20:24:08.079922930Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ca246274-20fe-4f8b-9501-7ce14b50d5a4 name=/runtime.v1.RuntimeService/Version
	Dec 17 20:24:08 pause-722044 crio[2801]: time="2025-12-17 20:24:08.081513227Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5fa4148d-3b1f-4a5e-b58d-cebb9c08924b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 20:24:08 pause-722044 crio[2801]: time="2025-12-17 20:24:08.081885156Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766003048081865044,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:128011,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5fa4148d-3b1f-4a5e-b58d-cebb9c08924b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 20:24:08 pause-722044 crio[2801]: time="2025-12-17 20:24:08.082725861Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a85b5b7e-2580-4ebb-83d5-4352542dd995 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 20:24:08 pause-722044 crio[2801]: time="2025-12-17 20:24:08.082802448Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a85b5b7e-2580-4ebb-83d5-4352542dd995 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 20:24:08 pause-722044 crio[2801]: time="2025-12-17 20:24:08.083055864Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:569ce5bc0074ae2269067a48b2c7bdcb79c7ec4420e657f9b3d29469cf75803e,PodSandboxId:a8aeaab42259b6fb3f701143a7ae8d45a4d3adf27ce74e8494720310469c642b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766003033780682122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7grrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7659c433-1b61-45dd-a6ee-14007a1efcda,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c04f950f167c76f20d1fcacf0e4feea0a624a6b0f25513076e6d8ee564a481bd,PodSandboxId:b11ff73723458d52c2f72fd01fd3275b4049e56a7762280f6ac2dea187dd801e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766003033796784692,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-snthq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24049acb-98c2-425b-b662-917a0f36e924,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff6cf0abbf3f6fda28f66d7fe150b08ac01b68f812fe75463e81ccdddfd15dc6,PodSandboxId:cb06495b9771e523f9c986a0a0c266640e41da22723c8ca23ddd0059786583ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766003028951678240,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83d8b514e6b67d9e22e45fb706e56aae,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7faab952af369c929f10b4a2e7f271a9e14e5d78e2069f149ca70ff8b9e4b904,PodSandboxId:f209a2779d795938af8709dd341c6402f8d38f6ff06dcc6947169c801e3945ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b
0c,State:CONTAINER_RUNNING,CreatedAt:1766003028960946916,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a6a12fb84ad9e69321c2dfd275a193c,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2c6ecbb072a467a63706401a7d5c001a661e32653ead95d8e11bf023908b7e0,PodSandboxId:97505412ecef1fbc01c623b497b45b3df1bbb0cc01a790811cffe6a8baebf71e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766003028958737572,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb726f0e9635b0cbfd43d7e7b7eb9dbe,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77f4487b3cdf857c8a5674baf1be7a012598267474c78df167bb34a781f1469c,PodSandboxId:d16b6434c38711eb995045ab894c68c5184fb0c3e6ac82dcc155e49b7b245df0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Imag
e:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766003028929046639,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b21d1fafe57d24c490b66f74eb1a9a96,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:288a092a801206a702bf6578ae6b07e6044aead431acb0edbc60cf72d5ca3b17,PodSandboxId:b11ff73723458d52c2f72fd01fd3275b4049e56a776228
0f6ac2dea187dd801e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1766003003475507590,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-snthq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24049acb-98c2-425b-b662-917a0f36e924,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3148e9a334330e99df200e53da3dd65a4273e1bd7bfe071c3d5a2ab5babc79cd,PodSandboxId:a8aeaab42259b6fb3f701143a7ae8d45a4d3adf27ce74e8494720310469c642b,Metadata:&Conta
inerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1766003004534112897,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7grrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7659c433-1b61-45dd-a6ee-14007a1efcda,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00a2e25f105f3f6ead22ad8dc039761969eb90ee1fbadaddcc49dfff31ea179b,PodSandboxId:f209a2779d795938af8709dd341c6402f8d38f6ff06dcc6947169c801e3945ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_EXITED,CreatedAt:1766003003541941060,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a6a12fb84ad9e69321c2dfd275a193c,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dbeb3a9804d156cf89fd691a0bb1fbcc3a6a27f4878706712834561cd962506,PodSandboxId:d16b6434c38711eb995045ab894c68c5184fb0c3e6ac82dcc155e49b7b245df0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1766003003409790971,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b21d1fafe57d24c490b66f74eb1a9a96,},Annotations:map[string]string{io.kubernetes.container.hash:
5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96157b431de6bce3a1d7a6419fab455461481cafe1319056a3a0efa2e1c076cb,PodSandboxId:97505412ecef1fbc01c623b497b45b3df1bbb0cc01a790811cffe6a8baebf71e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1766003003376876821,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-722044,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: eb726f0e9635b0cbfd43d7e7b7eb9dbe,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb92e68f0672b340c271cd39f375352462fbcd4af3068254b0220df4d2490ea9,PodSandboxId:cb06495b9771e523f9c986a0a0c266640e41da22723c8ca23ddd0059786583ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1766003003277513316,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83d8b514e6b67d9e22e45fb706e56aae,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a85b5b7e-2580-4ebb-83d5-4352542dd995 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	c04f950f167c7       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691   14 seconds ago      Running             kube-proxy                2                   b11ff73723458       kube-proxy-snthq                       kube-system
	569ce5bc0074a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   14 seconds ago      Running             coredns                   2                   a8aeaab42259b       coredns-66bc5c9577-7grrd               kube-system
	7faab952af369       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c   19 seconds ago      Running             kube-apiserver            2                   f209a2779d795       kube-apiserver-pause-722044            kube-system
	e2c6ecbb072a4       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942   19 seconds ago      Running             kube-controller-manager   2                   97505412ecef1       kube-controller-manager-pause-722044   kube-system
	ff6cf0abbf3f6       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78   19 seconds ago      Running             kube-scheduler            2                   cb06495b9771e       kube-scheduler-pause-722044            kube-system
	77f4487b3cdf8       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   19 seconds ago      Running             etcd                      2                   d16b6434c3871       etcd-pause-722044                      kube-system
	3148e9a334330       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   43 seconds ago      Exited              coredns                   1                   a8aeaab42259b       coredns-66bc5c9577-7grrd               kube-system
	00a2e25f105f3       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c   44 seconds ago      Exited              kube-apiserver            1                   f209a2779d795       kube-apiserver-pause-722044            kube-system
	288a092a80120       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691   44 seconds ago      Exited              kube-proxy                1                   b11ff73723458       kube-proxy-snthq                       kube-system
	1dbeb3a9804d1       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   44 seconds ago      Exited              etcd                      1                   d16b6434c3871       etcd-pause-722044                      kube-system
	96157b431de6b       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942   44 seconds ago      Exited              kube-controller-manager   1                   97505412ecef1       kube-controller-manager-pause-722044   kube-system
	eb92e68f0672b       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78   44 seconds ago      Exited              kube-scheduler            1                   cb06495b9771e       kube-scheduler-pause-722044            kube-system
	
	
	==> coredns [3148e9a334330e99df200e53da3dd65a4273e1bd7bfe071c3d5a2ab5babc79cd] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8b8641eae0af5337389aa76a78f71d2e2a7bd54cc199277be5abe199aebbfd3c9e156259680c91eb397a4c282437fd35af249d42857043b32bf3beb690ad2f54
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:55174 - 23645 "HINFO IN 466270137555793141.2631611240893111981. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.027213015s
	
	
	==> coredns [569ce5bc0074ae2269067a48b2c7bdcb79c7ec4420e657f9b3d29469cf75803e] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8b8641eae0af5337389aa76a78f71d2e2a7bd54cc199277be5abe199aebbfd3c9e156259680c91eb397a4c282437fd35af249d42857043b32bf3beb690ad2f54
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48732 - 46731 "HINFO IN 729494661844392183.2324542142382827573. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.026035134s
	
	
	==> describe nodes <==
	Name:               pause-722044
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-722044
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2e96f676eb7e96389e85fe0658a4ede4c4ba6924
	                    minikube.k8s.io/name=pause-722044
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T20_22_53_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 20:22:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-722044
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 20:24:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 20:23:52 +0000   Wed, 17 Dec 2025 20:22:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 20:23:52 +0000   Wed, 17 Dec 2025 20:22:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 20:23:52 +0000   Wed, 17 Dec 2025 20:22:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 20:23:52 +0000   Wed, 17 Dec 2025 20:22:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.108
	  Hostname:    pause-722044
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 80d5baeedfdb460f88dada4fa0f98d05
	  System UUID:                80d5baee-dfdb-460f-88da-da4fa0f98d05
	  Boot ID:                    0a5a2d07-0736-4b0f-aade-295cd6926e33
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-7grrd                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     71s
	  kube-system                 etcd-pause-722044                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         76s
	  kube-system                 kube-apiserver-pause-722044             250m (12%)    0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-controller-manager-pause-722044    200m (10%)    0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-proxy-snthq                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-scheduler-pause-722044             100m (5%)     0 (0%)      0 (0%)           0 (0%)         76s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 69s                kube-proxy       
	  Normal  Starting                 13s                kube-proxy       
	  Normal  Starting                 40s                kube-proxy       
	  Normal  Starting                 76s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  76s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  76s                kubelet          Node pause-722044 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    76s                kubelet          Node pause-722044 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     76s                kubelet          Node pause-722044 status is now: NodeHasSufficientPID
	  Normal  NodeReady                75s                kubelet          Node pause-722044 status is now: NodeReady
	  Normal  RegisteredNode           72s                node-controller  Node pause-722044 event: Registered Node pause-722044 in Controller
	  Normal  RegisteredNode           37s                node-controller  Node pause-722044 event: Registered Node pause-722044 in Controller
	  Normal  Starting                 20s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20s (x8 over 20s)  kubelet          Node pause-722044 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20s (x8 over 20s)  kubelet          Node pause-722044 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20s (x7 over 20s)  kubelet          Node pause-722044 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12s                node-controller  Node pause-722044 event: Registered Node pause-722044 in Controller
	
	
	==> dmesg <==
	[Dec17 20:22] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001631] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.005825] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.184291] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.083334] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.117723] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.182491] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.259984] kauditd_printk_skb: 18 callbacks suppressed
	[Dec17 20:23] kauditd_printk_skb: 219 callbacks suppressed
	[  +0.105331] kauditd_printk_skb: 38 callbacks suppressed
	[  +0.623003] kauditd_printk_skb: 252 callbacks suppressed
	[  +7.263217] kauditd_printk_skb: 41 callbacks suppressed
	[  +0.205967] kauditd_printk_skb: 25 callbacks suppressed
	[  +5.617479] kauditd_printk_skb: 83 callbacks suppressed
	
	
	==> etcd [1dbeb3a9804d156cf89fd691a0bb1fbcc3a6a27f4878706712834561cd962506] <==
	{"level":"warn","ts":"2025-12-17T20:23:27.052902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:23:27.062522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:23:27.073304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:23:27.082946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:23:27.090549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:23:27.103785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:23:27.163623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43892","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-17T20:23:45.512949Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-17T20:23:45.513086Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-722044","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.108:2380"],"advertise-client-urls":["https://192.168.61.108:2379"]}
	{"level":"error","ts":"2025-12-17T20:23:45.513264Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-17T20:23:45.515610Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-17T20:23:45.517151Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-17T20:23:45.517288Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.61.108:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-17T20:23:45.517447Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.61.108:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-17T20:23:45.517444Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-17T20:23:45.517462Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.61.108:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-17T20:23:45.517465Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-17T20:23:45.517477Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-17T20:23:45.517481Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7f161a451982983d","current-leader-member-id":"7f161a451982983d"}
	{"level":"info","ts":"2025-12-17T20:23:45.517631Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-17T20:23:45.517649Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-17T20:23:45.521467Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.61.108:2380"}
	{"level":"error","ts":"2025-12-17T20:23:45.521561Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.61.108:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-17T20:23:45.521608Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.61.108:2380"}
	{"level":"info","ts":"2025-12-17T20:23:45.521620Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-722044","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.108:2380"],"advertise-client-urls":["https://192.168.61.108:2379"]}
	
	
	==> etcd [77f4487b3cdf857c8a5674baf1be7a012598267474c78df167bb34a781f1469c] <==
	{"level":"warn","ts":"2025-12-17T20:23:51.668271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:23:51.679352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:23:51.697066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:23:51.715028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:23:51.735834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:23:51.756357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:23:51.771226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:23:51.791836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:23:51.909748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57298","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-17T20:24:02.979645Z","caller":"traceutil/trace.go:172","msg":"trace[1810309793] linearizableReadLoop","detail":"{readStateIndex:599; appliedIndex:599; }","duration":"243.12447ms","start":"2025-12-17T20:24:02.736503Z","end":"2025-12-17T20:24:02.979627Z","steps":["trace[1810309793] 'read index received'  (duration: 243.120417ms)","trace[1810309793] 'applied index is now lower than readState.Index'  (duration: 3.525µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T20:24:02.979801Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"243.291024ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-17T20:24:02.979853Z","caller":"traceutil/trace.go:172","msg":"trace[1907126338] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:554; }","duration":"243.367522ms","start":"2025-12-17T20:24:02.736478Z","end":"2025-12-17T20:24:02.979846Z","steps":["trace[1907126338] 'agreement among raft nodes before linearized reading'  (duration: 243.26366ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T20:24:02.981027Z","caller":"traceutil/trace.go:172","msg":"trace[565004326] transaction","detail":"{read_only:false; response_revision:555; number_of_response:1; }","duration":"282.183004ms","start":"2025-12-17T20:24:02.698828Z","end":"2025-12-17T20:24:02.981011Z","steps":["trace[565004326] 'process raft request'  (duration: 281.181971ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T20:24:03.508474Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"136.25823ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10970094889143209044 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-pause-722044\" mod_revision:555 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-pause-722044\" value_size:4777 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-pause-722044\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-17T20:24:03.508565Z","caller":"traceutil/trace.go:172","msg":"trace[1076540448] transaction","detail":"{read_only:false; response_revision:556; number_of_response:1; }","duration":"515.232148ms","start":"2025-12-17T20:24:02.993323Z","end":"2025-12-17T20:24:03.508556Z","steps":["trace[1076540448] 'process raft request'  (duration: 378.497506ms)","trace[1076540448] 'compare'  (duration: 136.028147ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T20:24:03.508609Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-17T20:24:02.993304Z","time spent":"515.284012ms","remote":"127.0.0.1:56558","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4839,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-pause-722044\" mod_revision:555 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-pause-722044\" value_size:4777 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-pause-722044\" > >"}
	{"level":"info","ts":"2025-12-17T20:24:03.916235Z","caller":"traceutil/trace.go:172","msg":"trace[924582882] linearizableReadLoop","detail":"{readStateIndex:601; appliedIndex:601; }","duration":"496.513762ms","start":"2025-12-17T20:24:03.419637Z","end":"2025-12-17T20:24:03.916150Z","steps":["trace[924582882] 'read index received'  (duration: 496.485734ms)","trace[924582882] 'applied index is now lower than readState.Index'  (duration: 27.229µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T20:24:04.028903Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"609.25853ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-722044\" limit:1 ","response":"range_response_count:1 size:6083"}
	{"level":"info","ts":"2025-12-17T20:24:04.029121Z","caller":"traceutil/trace.go:172","msg":"trace[1046993475] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-722044; range_end:; response_count:1; response_revision:556; }","duration":"609.472948ms","start":"2025-12-17T20:24:03.419633Z","end":"2025-12-17T20:24:04.029106Z","steps":["trace[1046993475] 'agreement among raft nodes before linearized reading'  (duration: 496.680005ms)","trace[1046993475] 'range keys from in-memory index tree'  (duration: 112.447609ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T20:24:04.028906Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"112.736085ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10970094889143209045 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-v25ynsvno6l4fgrhyszhs7l25a\" mod_revision:488 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-v25ynsvno6l4fgrhyszhs7l25a\" value_size:603 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-v25ynsvno6l4fgrhyszhs7l25a\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-17T20:24:04.029538Z","caller":"traceutil/trace.go:172","msg":"trace[745654762] transaction","detail":"{read_only:false; response_revision:558; number_of_response:1; }","duration":"826.866533ms","start":"2025-12-17T20:24:03.202661Z","end":"2025-12-17T20:24:04.029528Z","steps":["trace[745654762] 'process raft request'  (duration: 826.800949ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T20:24:04.029613Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-17T20:24:03.202644Z","time spent":"826.923329ms","remote":"127.0.0.1:56706","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":536,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/pause-722044\" mod_revision:487 > success:<request_put:<key:\"/registry/leases/kube-node-lease/pause-722044\" value_size:483 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/pause-722044\" > >"}
	{"level":"info","ts":"2025-12-17T20:24:04.029764Z","caller":"traceutil/trace.go:172","msg":"trace[551268511] transaction","detail":"{read_only:false; response_revision:557; number_of_response:1; }","duration":"959.454103ms","start":"2025-12-17T20:24:03.070285Z","end":"2025-12-17T20:24:04.029739Z","steps":["trace[551268511] 'process raft request'  (duration: 845.829481ms)","trace[551268511] 'compare'  (duration: 112.649256ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T20:24:04.029895Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-17T20:24:03.419619Z","time spent":"609.534154ms","remote":"127.0.0.1:56558","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":6105,"request content":"key:\"/registry/pods/kube-system/etcd-pause-722044\" limit:1 "}
	{"level":"warn","ts":"2025-12-17T20:24:04.029928Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-17T20:24:03.070104Z","time spent":"959.749238ms","remote":"127.0.0.1:56706","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":676,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-v25ynsvno6l4fgrhyszhs7l25a\" mod_revision:488 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-v25ynsvno6l4fgrhyszhs7l25a\" value_size:603 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-v25ynsvno6l4fgrhyszhs7l25a\" > >"}
	
	
	==> kernel <==
	 20:24:08 up 1 min,  0 users,  load average: 0.88, 0.38, 0.14
	Linux pause-722044 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Dec 17 12:49:57 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [00a2e25f105f3f6ead22ad8dc039761969eb90ee1fbadaddcc49dfff31ea179b] <==
	I1217 20:23:35.282600       1 controller.go:120] Shutting down OpenAPI V3 controller
	I1217 20:23:35.282609       1 controller.go:170] Shutting down OpenAPI controller
	I1217 20:23:35.282864       1 apiservice_controller.go:134] Shutting down APIServiceRegistrationController
	I1217 20:23:35.282877       1 cluster_authentication_trust_controller.go:482] Shutting down cluster_authentication_trust_controller controller
	I1217 20:23:35.282886       1 customresource_discovery_controller.go:332] Shutting down DiscoveryController
	I1217 20:23:35.282904       1 crdregistration_controller.go:145] Shutting down crd-autoregister controller
	I1217 20:23:35.282943       1 apf_controller.go:389] Shutting down API Priority and Fairness config worker
	I1217 20:23:35.284615       1 dynamic_cafile_content.go:175] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1217 20:23:35.284713       1 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1217 20:23:35.286651       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1217 20:23:35.286713       1 dynamic_cafile_content.go:175] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1217 20:23:35.285436       1 dynamic_serving_content.go:149] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I1217 20:23:35.285464       1 controller.go:84] Shutting down OpenAPI AggregationController
	I1217 20:23:35.287619       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I1217 20:23:35.285537       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I1217 20:23:35.287889       1 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1217 20:23:35.285529       1 object_count_tracker.go:141] "StorageObjectCountTracker pruner is exiting"
	I1217 20:23:35.285559       1 controller.go:157] Shutting down quota evaluator
	I1217 20:23:35.288869       1 controller.go:176] quota evaluator worker shutdown
	I1217 20:23:35.286553       1 repairip.go:246] Shutting down ipallocator-repair-controller
	I1217 20:23:35.288948       1 controller.go:176] quota evaluator worker shutdown
	I1217 20:23:35.288974       1 controller.go:176] quota evaluator worker shutdown
	I1217 20:23:35.288991       1 controller.go:176] quota evaluator worker shutdown
	I1217 20:23:35.289005       1 controller.go:176] quota evaluator worker shutdown
	I1217 20:23:35.286637       1 secure_serving.go:259] Stopped listening on [::]:8443
	
	
	==> kube-apiserver [7faab952af369c929f10b4a2e7f271a9e14e5d78e2069f149ca70ff8b9e4b904] <==
	I1217 20:23:52.853899       1 policy_source.go:240] refreshing policies
	I1217 20:23:52.854842       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1217 20:23:52.854904       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1217 20:23:52.854945       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1217 20:23:52.855061       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1217 20:23:52.855149       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1217 20:23:52.871829       1 aggregator.go:171] initial CRD sync complete...
	I1217 20:23:52.872024       1 autoregister_controller.go:144] Starting autoregister controller
	I1217 20:23:52.872136       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 20:23:52.872241       1 cache.go:39] Caches are synced for autoregister controller
	I1217 20:23:52.873572       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1217 20:23:52.871882       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1217 20:23:52.928567       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 20:23:52.928894       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1217 20:23:52.931343       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 20:23:53.522503       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 20:23:53.646839       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1217 20:23:54.275705       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.61.108]
	I1217 20:23:54.277468       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 20:23:54.288358       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 20:23:54.839055       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 20:23:54.932223       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1217 20:23:55.008727       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 20:23:55.017646       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 20:23:57.448426       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [96157b431de6bce3a1d7a6419fab455461481cafe1319056a3a0efa2e1c076cb] <==
	I1217 20:23:31.163701       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1217 20:23:31.163715       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1217 20:23:31.165076       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1217 20:23:31.165120       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1217 20:23:31.165447       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1217 20:23:31.166669       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1217 20:23:31.166829       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1217 20:23:31.168235       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1217 20:23:31.169375       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1217 20:23:31.171712       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 20:23:31.171761       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1217 20:23:31.174081       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1217 20:23:31.174212       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 20:23:31.175388       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1217 20:23:31.196678       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1217 20:23:31.196743       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 20:23:31.200021       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1217 20:23:31.208383       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1217 20:23:31.209649       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1217 20:23:31.212972       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1217 20:23:31.213024       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1217 20:23:31.213776       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1217 20:23:31.214153       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1217 20:23:31.214241       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1217 20:23:31.216106       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	
	
	==> kube-controller-manager [e2c6ecbb072a467a63706401a7d5c001a661e32653ead95d8e11bf023908b7e0] <==
	I1217 20:23:56.161240       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1217 20:23:56.164017       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1217 20:23:56.166774       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1217 20:23:56.169121       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1217 20:23:56.170886       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1217 20:23:56.176090       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1217 20:23:56.176554       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1217 20:23:56.177265       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-722044"
	I1217 20:23:56.177531       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1217 20:23:56.180553       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1217 20:23:56.182717       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 20:23:56.182765       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1217 20:23:56.183484       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1217 20:23:56.183894       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1217 20:23:56.184996       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1217 20:23:56.185369       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1217 20:23:56.186256       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1217 20:23:56.186824       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1217 20:23:56.190880       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1217 20:23:56.191029       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 20:23:56.191234       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1217 20:23:56.191241       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1217 20:23:56.197506       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1217 20:23:56.197516       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1217 20:23:56.202548       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	
	
	==> kube-proxy [288a092a801206a702bf6578ae6b07e6044aead431acb0edbc60cf72d5ca3b17] <==
	I1217 20:23:25.948130       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 20:23:27.892904       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 20:23:27.893039       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.61.108"]
	E1217 20:23:27.893155       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 20:23:28.137499       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1217 20:23:28.137589       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1217 20:23:28.137621       1 server_linux.go:132] "Using iptables Proxier"
	I1217 20:23:28.165753       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 20:23:28.171091       1 server.go:527] "Version info" version="v1.34.3"
	I1217 20:23:28.171261       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 20:23:28.175615       1 config.go:200] "Starting service config controller"
	I1217 20:23:28.175995       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 20:23:28.176131       1 config.go:106] "Starting endpoint slice config controller"
	I1217 20:23:28.176144       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 20:23:28.176626       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 20:23:28.176805       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 20:23:28.182606       1 config.go:309] "Starting node config controller"
	I1217 20:23:28.182678       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 20:23:28.182687       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 20:23:28.277244       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 20:23:28.277279       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 20:23:28.277304       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [c04f950f167c76f20d1fcacf0e4feea0a624a6b0f25513076e6d8ee564a481bd] <==
	I1217 20:23:54.176826       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 20:23:54.280292       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 20:23:54.280368       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.61.108"]
	E1217 20:23:54.280436       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 20:23:54.338119       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1217 20:23:54.338283       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1217 20:23:54.338337       1 server_linux.go:132] "Using iptables Proxier"
	I1217 20:23:54.351701       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 20:23:54.352088       1 server.go:527] "Version info" version="v1.34.3"
	I1217 20:23:54.352131       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 20:23:54.358961       1 config.go:200] "Starting service config controller"
	I1217 20:23:54.359002       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 20:23:54.359030       1 config.go:106] "Starting endpoint slice config controller"
	I1217 20:23:54.359035       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 20:23:54.359051       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 20:23:54.359056       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 20:23:54.360087       1 config.go:309] "Starting node config controller"
	I1217 20:23:54.360125       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 20:23:54.360133       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 20:23:54.459375       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 20:23:54.459404       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 20:23:54.459423       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [eb92e68f0672b340c271cd39f375352462fbcd4af3068254b0220df4d2490ea9] <==
	I1217 20:23:25.636160       1 serving.go:386] Generated self-signed cert in-memory
	W1217 20:23:27.854593       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1217 20:23:27.854633       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1217 20:23:27.854666       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1217 20:23:27.854680       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1217 20:23:27.894521       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1217 20:23:27.894614       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 20:23:27.897232       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 20:23:27.897961       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 20:23:27.897975       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 20:23:27.898055       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1217 20:23:27.999493       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 20:23:45.804886       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1217 20:23:45.804938       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1217 20:23:45.804981       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1217 20:23:45.805108       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [ff6cf0abbf3f6fda28f66d7fe150b08ac01b68f812fe75463e81ccdddfd15dc6] <==
	I1217 20:23:50.242871       1 serving.go:386] Generated self-signed cert in-memory
	W1217 20:23:52.754973       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1217 20:23:52.755015       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1217 20:23:52.755026       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1217 20:23:52.755032       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1217 20:23:52.824147       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1217 20:23:52.824298       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 20:23:52.832375       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 20:23:52.832522       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 20:23:52.833264       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 20:23:52.834629       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1217 20:23:52.848025       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1217 20:23:52.934099       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 17 20:23:52 pause-722044 kubelet[3955]: E1217 20:23:52.068556    3955 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-722044\" not found" node="pause-722044"
	Dec 17 20:23:52 pause-722044 kubelet[3955]: E1217 20:23:52.686604    3955 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-722044\" not found" node="pause-722044"
	Dec 17 20:23:52 pause-722044 kubelet[3955]: I1217 20:23:52.763109    3955 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-722044"
	Dec 17 20:23:52 pause-722044 kubelet[3955]: E1217 20:23:52.917479    3955 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-pause-722044\" already exists" pod="kube-system/etcd-pause-722044"
	Dec 17 20:23:52 pause-722044 kubelet[3955]: I1217 20:23:52.917587    3955 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-722044"
	Dec 17 20:23:52 pause-722044 kubelet[3955]: I1217 20:23:52.919840    3955 kubelet_node_status.go:124] "Node was previously registered" node="pause-722044"
	Dec 17 20:23:52 pause-722044 kubelet[3955]: I1217 20:23:52.920339    3955 kubelet_node_status.go:78] "Successfully registered node" node="pause-722044"
	Dec 17 20:23:52 pause-722044 kubelet[3955]: I1217 20:23:52.920579    3955 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 17 20:23:52 pause-722044 kubelet[3955]: I1217 20:23:52.923757    3955 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 17 20:23:52 pause-722044 kubelet[3955]: E1217 20:23:52.938341    3955 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-722044\" already exists" pod="kube-system/kube-apiserver-pause-722044"
	Dec 17 20:23:52 pause-722044 kubelet[3955]: I1217 20:23:52.938448    3955 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-722044"
	Dec 17 20:23:52 pause-722044 kubelet[3955]: E1217 20:23:52.954906    3955 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-722044\" already exists" pod="kube-system/kube-controller-manager-pause-722044"
	Dec 17 20:23:52 pause-722044 kubelet[3955]: I1217 20:23:52.954946    3955 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-722044"
	Dec 17 20:23:52 pause-722044 kubelet[3955]: E1217 20:23:52.972059    3955 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-722044\" already exists" pod="kube-system/kube-scheduler-pause-722044"
	Dec 17 20:23:53 pause-722044 kubelet[3955]: I1217 20:23:53.424312    3955 apiserver.go:52] "Watching apiserver"
	Dec 17 20:23:53 pause-722044 kubelet[3955]: I1217 20:23:53.464420    3955 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 17 20:23:53 pause-722044 kubelet[3955]: I1217 20:23:53.519342    3955 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/24049acb-98c2-425b-b662-917a0f36e924-xtables-lock\") pod \"kube-proxy-snthq\" (UID: \"24049acb-98c2-425b-b662-917a0f36e924\") " pod="kube-system/kube-proxy-snthq"
	Dec 17 20:23:53 pause-722044 kubelet[3955]: I1217 20:23:53.519395    3955 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/24049acb-98c2-425b-b662-917a0f36e924-lib-modules\") pod \"kube-proxy-snthq\" (UID: \"24049acb-98c2-425b-b662-917a0f36e924\") " pod="kube-system/kube-proxy-snthq"
	Dec 17 20:23:53 pause-722044 kubelet[3955]: I1217 20:23:53.734504    3955 scope.go:117] "RemoveContainer" containerID="3148e9a334330e99df200e53da3dd65a4273e1bd7bfe071c3d5a2ab5babc79cd"
	Dec 17 20:23:53 pause-722044 kubelet[3955]: I1217 20:23:53.734850    3955 scope.go:117] "RemoveContainer" containerID="288a092a801206a702bf6578ae6b07e6044aead431acb0edbc60cf72d5ca3b17"
	Dec 17 20:23:57 pause-722044 kubelet[3955]: I1217 20:23:57.410149    3955 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 17 20:23:58 pause-722044 kubelet[3955]: E1217 20:23:58.603675    3955 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766003038602446313  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:128011}  inodes_used:{value:57}}"
	Dec 17 20:23:58 pause-722044 kubelet[3955]: E1217 20:23:58.603704    3955 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766003038602446313  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:128011}  inodes_used:{value:57}}"
	Dec 17 20:24:08 pause-722044 kubelet[3955]: E1217 20:24:08.606405    3955 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766003048605770866  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:128011}  inodes_used:{value:57}}"
	Dec 17 20:24:08 pause-722044 kubelet[3955]: E1217 20:24:08.606429    3955 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766003048605770866  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:128011}  inodes_used:{value:57}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-722044 -n pause-722044
helpers_test.go:270: (dbg) Run:  kubectl --context pause-722044 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-722044 -n pause-722044
helpers_test.go:253: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-722044 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p pause-722044 logs -n 25: (1.545678558s)
helpers_test.go:261: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                            ARGS                                                                             │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p cert-options-597207                                                                                                                                      │ cert-options-597207       │ jenkins │ v1.37.0 │ 17 Dec 25 20:19 UTC │ 17 Dec 25 20:19 UTC │
	│ ssh     │ -p NoKubernetes-680060 sudo systemctl is-active --quiet service kubelet                                                                                     │ NoKubernetes-680060       │ jenkins │ v1.37.0 │ 17 Dec 25 20:19 UTC │                     │
	│ start   │ -p kubernetes-upgrade-813074 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                      │ kubernetes-upgrade-813074 │ jenkins │ v1.37.0 │ 17 Dec 25 20:19 UTC │ 17 Dec 25 20:20 UTC │
	│ stop    │ -p NoKubernetes-680060                                                                                                                                      │ NoKubernetes-680060       │ jenkins │ v1.37.0 │ 17 Dec 25 20:19 UTC │ 17 Dec 25 20:19 UTC │
	│ start   │ -p NoKubernetes-680060 --driver=kvm2  --container-runtime=crio                                                                                              │ NoKubernetes-680060       │ jenkins │ v1.37.0 │ 17 Dec 25 20:19 UTC │ 17 Dec 25 20:20 UTC │
	│ start   │ -p running-upgrade-824542 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                      │ running-upgrade-824542    │ jenkins │ v1.37.0 │ 17 Dec 25 20:20 UTC │                     │
	│ stop    │ -p kubernetes-upgrade-813074                                                                                                                                │ kubernetes-upgrade-813074 │ jenkins │ v1.37.0 │ 17 Dec 25 20:20 UTC │ 17 Dec 25 20:20 UTC │
	│ ssh     │ -p NoKubernetes-680060 sudo systemctl is-active --quiet service kubelet                                                                                     │ NoKubernetes-680060       │ jenkins │ v1.37.0 │ 17 Dec 25 20:20 UTC │                     │
	│ delete  │ -p NoKubernetes-680060                                                                                                                                      │ NoKubernetes-680060       │ jenkins │ v1.37.0 │ 17 Dec 25 20:20 UTC │ 17 Dec 25 20:20 UTC │
	│ start   │ -p kubernetes-upgrade-813074 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                 │ kubernetes-upgrade-813074 │ jenkins │ v1.37.0 │ 17 Dec 25 20:20 UTC │ 17 Dec 25 20:21 UTC │
	│ start   │ -p stopped-upgrade-897195 --memory=3072 --vm-driver=kvm2  --container-runtime=crio                                                                          │ stopped-upgrade-897195    │ jenkins │ v1.35.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ start   │ -p kubernetes-upgrade-813074 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio                                             │ kubernetes-upgrade-813074 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │                     │
	│ start   │ -p kubernetes-upgrade-813074 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                 │ kubernetes-upgrade-813074 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:22 UTC │
	│ stop    │ stopped-upgrade-897195 stop                                                                                                                                 │ stopped-upgrade-897195    │ jenkins │ v1.35.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ start   │ -p stopped-upgrade-897195 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                      │ stopped-upgrade-897195    │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:22 UTC │
	│ delete  │ -p kubernetes-upgrade-813074                                                                                                                                │ kubernetes-upgrade-813074 │ jenkins │ v1.37.0 │ 17 Dec 25 20:22 UTC │ 17 Dec 25 20:22 UTC │
	│ start   │ -p pause-722044 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio                                                     │ pause-722044              │ jenkins │ v1.37.0 │ 17 Dec 25 20:22 UTC │ 17 Dec 25 20:23 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-897195 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker │ stopped-upgrade-897195    │ jenkins │ v1.37.0 │ 17 Dec 25 20:22 UTC │                     │
	│ delete  │ -p stopped-upgrade-897195                                                                                                                                   │ stopped-upgrade-897195    │ jenkins │ v1.37.0 │ 17 Dec 25 20:22 UTC │ 17 Dec 25 20:22 UTC │
	│ start   │ -p auto-698465 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio                                       │ auto-698465               │ jenkins │ v1.37.0 │ 17 Dec 25 20:22 UTC │ 17 Dec 25 20:23 UTC │
	│ start   │ -p cert-expiration-229742 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio                                                     │ cert-expiration-229742    │ jenkins │ v1.37.0 │ 17 Dec 25 20:22 UTC │ 17 Dec 25 20:23 UTC │
	│ start   │ -p pause-722044 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                              │ pause-722044              │ jenkins │ v1.37.0 │ 17 Dec 25 20:23 UTC │ 17 Dec 25 20:24 UTC │
	│ delete  │ -p cert-expiration-229742                                                                                                                                   │ cert-expiration-229742    │ jenkins │ v1.37.0 │ 17 Dec 25 20:23 UTC │ 17 Dec 25 20:23 UTC │
	│ start   │ -p kindnet-698465 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio                      │ kindnet-698465            │ jenkins │ v1.37.0 │ 17 Dec 25 20:23 UTC │                     │
	│ ssh     │ -p auto-698465 pgrep -a kubelet                                                                                                                             │ auto-698465               │ jenkins │ v1.37.0 │ 17 Dec 25 20:23 UTC │ 17 Dec 25 20:23 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 20:23:37
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 20:23:37.738608   41406 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:23:37.738726   41406 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:23:37.738734   41406 out.go:374] Setting ErrFile to fd 2...
	I1217 20:23:37.738738   41406 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:23:37.738923   41406 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-3611/.minikube/bin
	I1217 20:23:37.739377   41406 out.go:368] Setting JSON to false
	I1217 20:23:37.740230   41406 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3957,"bootTime":1765999061,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 20:23:37.740291   41406 start.go:143] virtualization: kvm guest
	I1217 20:23:37.742214   41406 out.go:179] * [kindnet-698465] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 20:23:37.743374   41406 out.go:179]   - MINIKUBE_LOCATION=22186
	I1217 20:23:37.743371   41406 notify.go:221] Checking for updates...
	I1217 20:23:37.744574   41406 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 20:23:37.745743   41406 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-3611/kubeconfig
	I1217 20:23:37.746882   41406 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-3611/.minikube
	I1217 20:23:37.747977   41406 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 20:23:37.752694   41406 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 20:23:37.754075   41406 config.go:182] Loaded profile config "auto-698465": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:23:37.754180   41406 config.go:182] Loaded profile config "guest-867309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1217 20:23:37.754316   41406 config.go:182] Loaded profile config "pause-722044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:23:37.754423   41406 config.go:182] Loaded profile config "running-upgrade-824542": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1217 20:23:37.754541   41406 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 20:23:37.788856   41406 out.go:179] * Using the kvm2 driver based on user configuration
	I1217 20:23:37.789762   41406 start.go:309] selected driver: kvm2
	I1217 20:23:37.789776   41406 start.go:927] validating driver "kvm2" against <nil>
	I1217 20:23:37.789787   41406 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 20:23:37.790597   41406 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 20:23:37.790828   41406 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 20:23:37.790853   41406 cni.go:84] Creating CNI manager for "kindnet"
	I1217 20:23:37.790858   41406 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 20:23:37.790896   41406 start.go:353] cluster config:
	{Name:kindnet-698465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:kindnet-698465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:23:37.790977   41406 iso.go:125] acquiring lock: {Name:mkf0d7f706dad630931de886de0fce55b517853c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 20:23:37.792213   41406 out.go:179] * Starting "kindnet-698465" primary control-plane node in "kindnet-698465" cluster
	I1217 20:23:37.793120   41406 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:23:37.793150   41406 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-3611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1217 20:23:37.793160   41406 cache.go:65] Caching tarball of preloaded images
	I1217 20:23:37.793243   41406 preload.go:238] Found /home/jenkins/minikube-integration/22186-3611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 20:23:37.793261   41406 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 20:23:37.793336   41406 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/config.json ...
	I1217 20:23:37.793354   41406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/config.json: {Name:mk1a7b2e322d257130e0cb198c67e12a9ac9a0b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:23:37.793479   41406 start.go:360] acquireMachinesLock for kindnet-698465: {Name:mk03890d04d41d66ccbc23571d0f065ba20ffda0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1217 20:23:37.793515   41406 start.go:364] duration metric: took 22.259µs to acquireMachinesLock for "kindnet-698465"
	I1217 20:23:37.793567   41406 start.go:93] Provisioning new machine with config: &{Name:kindnet-698465 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:kindnet-698465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 20:23:37.793631   41406 start.go:125] createHost starting for "" (driver="kvm2")
	W1217 20:23:35.856116   40822 pod_ready.go:104] pod "coredns-66bc5c9577-z6sfq" is not "Ready", error: <nil>
	W1217 20:23:38.357022   40822 pod_ready.go:104] pod "coredns-66bc5c9577-z6sfq" is not "Ready", error: <nil>
	I1217 20:23:35.858233   39298 cri.go:89] found id: "07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8"
	I1217 20:23:35.858254   39298 cri.go:89] found id: ""
	I1217 20:23:35.858264   39298 logs.go:282] 1 containers: [07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8]
	I1217 20:23:35.858327   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:35.863872   39298 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:23:35.863941   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:23:35.912058   39298 cri.go:89] found id: "132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1"
	I1217 20:23:35.912082   39298 cri.go:89] found id: "a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e"
	I1217 20:23:35.912088   39298 cri.go:89] found id: ""
	I1217 20:23:35.912097   39298 logs.go:282] 2 containers: [132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1 a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e]
	I1217 20:23:35.912152   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:35.917700   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:35.923109   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:23:35.923189   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:23:35.968314   39298 cri.go:89] found id: "316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e"
	I1217 20:23:35.968346   39298 cri.go:89] found id: "8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2"
	I1217 20:23:35.968353   39298 cri.go:89] found id: ""
	I1217 20:23:35.968362   39298 logs.go:282] 2 containers: [316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e 8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2]
	I1217 20:23:35.968423   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:35.974010   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:35.979201   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:23:35.979278   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:23:36.024228   39298 cri.go:89] found id: "5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde"
	I1217 20:23:36.024253   39298 cri.go:89] found id: ""
	I1217 20:23:36.024263   39298 logs.go:282] 1 containers: [5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde]
	I1217 20:23:36.024324   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:36.028787   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:23:36.028856   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:23:36.077010   39298 cri.go:89] found id: "fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda"
	I1217 20:23:36.077032   39298 cri.go:89] found id: ""
	I1217 20:23:36.077041   39298 logs.go:282] 1 containers: [fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda]
	I1217 20:23:36.077098   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:36.081463   39298 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:23:36.081539   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:23:36.129947   39298 cri.go:89] found id: ""
	I1217 20:23:36.129981   39298 logs.go:282] 0 containers: []
	W1217 20:23:36.129991   39298 logs.go:284] No container was found matching "kindnet"
	I1217 20:23:36.129999   39298 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 20:23:36.130062   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 20:23:36.165790   39298 cri.go:89] found id: "b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90"
	I1217 20:23:36.165819   39298 cri.go:89] found id: ""
	I1217 20:23:36.165830   39298 logs.go:282] 1 containers: [b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90]
	I1217 20:23:36.165893   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:36.170698   39298 logs.go:123] Gathering logs for etcd [07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8] ...
	I1217 20:23:36.170772   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8"
	I1217 20:23:36.225282   39298 logs.go:123] Gathering logs for coredns [132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1] ...
	I1217 20:23:36.225311   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1"
	I1217 20:23:36.261773   39298 logs.go:123] Gathering logs for coredns [a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e] ...
	I1217 20:23:36.261810   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e"
	I1217 20:23:36.303724   39298 logs.go:123] Gathering logs for kube-scheduler [316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e] ...
	I1217 20:23:36.303752   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e"
	I1217 20:23:36.388426   39298 logs.go:123] Gathering logs for kube-scheduler [8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2] ...
	I1217 20:23:36.388457   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2"
	I1217 20:23:36.437348   39298 logs.go:123] Gathering logs for container status ...
	I1217 20:23:36.437376   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:23:36.503779   39298 logs.go:123] Gathering logs for kubelet ...
	I1217 20:23:36.503806   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:23:36.600383   39298 logs.go:123] Gathering logs for dmesg ...
	I1217 20:23:36.600421   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:23:36.617428   39298 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:23:36.617467   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:23:36.703325   39298 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:23:36.703347   39298 logs.go:123] Gathering logs for kube-proxy [5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde] ...
	I1217 20:23:36.703363   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde"
	I1217 20:23:36.749987   39298 logs.go:123] Gathering logs for kube-controller-manager [fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda] ...
	I1217 20:23:36.750017   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda"
	I1217 20:23:36.793183   39298 logs.go:123] Gathering logs for storage-provisioner [b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90] ...
	I1217 20:23:36.793212   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90"
	I1217 20:23:36.839944   39298 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:23:36.839992   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:23:37.203837   39298 logs.go:123] Gathering logs for kube-apiserver [83636ebff083364d3ed28b74e408a7dd6f4875ce6d80d9cbf6b580ee6c1180ad] ...
	I1217 20:23:37.203883   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83636ebff083364d3ed28b74e408a7dd6f4875ce6d80d9cbf6b580ee6c1180ad"
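	The lines above repeat a two-step pattern: enumerate the containers for each control-plane component with "sudo crictl ps -a --quiet --name=<component>", then tail each container's logs with "sudo crictl logs --tail 400 <id>" (plus journalctl for kubelet and CRI-O). A minimal local sketch of that pattern follows; it assumes crictl is on PATH and runs the commands directly, whereas minikube runs them over SSH inside the VM (ssh_runner.go).
	
	// crictl_gather.go - a sketch of the list-then-tail pattern above.
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// listContainers mirrors "sudo crictl ps -a --quiet --name=<name>":
	// IDs of all containers (any state) whose name matches the filter.
	func listContainers(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}
	
	// tailLogs mirrors "sudo crictl logs --tail 400 <id>".
	func tailLogs(id string) (string, error) {
		out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		return string(out), err
	}
	
	func main() {
		for _, component := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids, err := listContainers(component)
			if err != nil {
				fmt.Printf("listing %s: %v\n", component, err)
				continue
			}
			fmt.Printf("%d containers for %q: %v\n", len(ids), component, ids)
			for _, id := range ids {
				logs, _ := tailLogs(id) // ignore the exit status; the container may already be gone
				fmt.Println(logs)
			}
		}
	}
	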
	I1217 20:23:39.750223   39298 api_server.go:253] Checking apiserver healthz at https://192.168.83.130:8443/healthz ...
	I1217 20:23:39.750920   39298 api_server.go:269] stopped: https://192.168.83.130:8443/healthz: Get "https://192.168.83.130:8443/healthz": dial tcp 192.168.83.130:8443: connect: connection refused
	I1217 20:23:39.750973   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:23:39.751020   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:23:39.800091   39298 cri.go:89] found id: "83636ebff083364d3ed28b74e408a7dd6f4875ce6d80d9cbf6b580ee6c1180ad"
	I1217 20:23:39.800128   39298 cri.go:89] found id: ""
	I1217 20:23:39.800138   39298 logs.go:282] 1 containers: [83636ebff083364d3ed28b74e408a7dd6f4875ce6d80d9cbf6b580ee6c1180ad]
	I1217 20:23:39.800223   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:39.804642   39298 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:23:39.804696   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:23:39.845637   39298 cri.go:89] found id: "07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8"
	I1217 20:23:39.845666   39298 cri.go:89] found id: ""
	I1217 20:23:39.845677   39298 logs.go:282] 1 containers: [07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8]
	I1217 20:23:39.845753   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:39.850412   39298 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:23:39.850491   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:23:39.893682   39298 cri.go:89] found id: "132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1"
	I1217 20:23:39.893710   39298 cri.go:89] found id: "a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e"
	I1217 20:23:39.893718   39298 cri.go:89] found id: ""
	I1217 20:23:39.893729   39298 logs.go:282] 2 containers: [132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1 a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e]
	I1217 20:23:39.893800   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:39.898824   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:39.903398   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:23:39.903459   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:23:39.948298   39298 cri.go:89] found id: "316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e"
	I1217 20:23:39.948323   39298 cri.go:89] found id: "8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2"
	I1217 20:23:39.948328   39298 cri.go:89] found id: ""
	I1217 20:23:39.948338   39298 logs.go:282] 2 containers: [316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e 8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2]
	I1217 20:23:39.948406   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:39.952874   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:39.957248   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:23:39.957323   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:23:39.997279   39298 cri.go:89] found id: "5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde"
	I1217 20:23:39.997309   39298 cri.go:89] found id: ""
	I1217 20:23:39.997324   39298 logs.go:282] 1 containers: [5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde]
	I1217 20:23:39.997401   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:40.002236   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:23:40.002315   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:23:40.046874   39298 cri.go:89] found id: "fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda"
	I1217 20:23:40.046898   39298 cri.go:89] found id: ""
	I1217 20:23:40.046908   39298 logs.go:282] 1 containers: [fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda]
	I1217 20:23:40.046980   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:40.051413   39298 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:23:40.051479   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:23:40.091909   39298 cri.go:89] found id: ""
	I1217 20:23:40.091962   39298 logs.go:282] 0 containers: []
	W1217 20:23:40.091976   39298 logs.go:284] No container was found matching "kindnet"
	I1217 20:23:40.091984   39298 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 20:23:40.092056   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 20:23:40.129051   39298 cri.go:89] found id: "b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90"
	I1217 20:23:40.129071   39298 cri.go:89] found id: ""
	I1217 20:23:40.129081   39298 logs.go:282] 1 containers: [b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90]
	I1217 20:23:40.129148   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:40.133839   39298 logs.go:123] Gathering logs for kubelet ...
	I1217 20:23:40.133864   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:23:40.253793   39298 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:23:40.253838   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:23:40.338176   39298 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:23:40.338207   39298 logs.go:123] Gathering logs for kube-proxy [5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde] ...
	I1217 20:23:40.338227   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde"
	I1217 20:23:40.379739   39298 logs.go:123] Gathering logs for kube-controller-manager [fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda] ...
	I1217 20:23:40.379772   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda"
	I1217 20:23:40.418780   39298 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:23:40.418808   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:23:40.779235   39298 logs.go:123] Gathering logs for container status ...
	I1217 20:23:40.779299   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:23:40.828209   39298 logs.go:123] Gathering logs for dmesg ...
	I1217 20:23:40.828240   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:23:40.845344   39298 logs.go:123] Gathering logs for kube-apiserver [83636ebff083364d3ed28b74e408a7dd6f4875ce6d80d9cbf6b580ee6c1180ad] ...
	I1217 20:23:40.845392   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83636ebff083364d3ed28b74e408a7dd6f4875ce6d80d9cbf6b580ee6c1180ad"
	I1217 20:23:37.794871   41406 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1217 20:23:37.795027   41406 start.go:159] libmachine.API.Create for "kindnet-698465" (driver="kvm2")
	I1217 20:23:37.795055   41406 client.go:173] LocalClient.Create starting
	I1217 20:23:37.795124   41406 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca.pem
	I1217 20:23:37.795152   41406 main.go:143] libmachine: Decoding PEM data...
	I1217 20:23:37.795170   41406 main.go:143] libmachine: Parsing certificate...
	I1217 20:23:37.795212   41406 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22186-3611/.minikube/certs/cert.pem
	I1217 20:23:37.795231   41406 main.go:143] libmachine: Decoding PEM data...
	I1217 20:23:37.795241   41406 main.go:143] libmachine: Parsing certificate...
	I1217 20:23:37.795562   41406 main.go:143] libmachine: creating domain...
	I1217 20:23:37.795573   41406 main.go:143] libmachine: creating network...
	I1217 20:23:37.796841   41406 main.go:143] libmachine: found existing default network
	I1217 20:23:37.797062   41406 main.go:143] libmachine: <network connections='4'>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1217 20:23:37.797807   41406 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:38:4c:e0} reservation:<nil>}
	I1217 20:23:37.798686   41406 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001dbc5d0}
	I1217 20:23:37.798766   41406 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-kindnet-698465</name>
	  <dns enable='no'/>
	  <ip address='192.168.50.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.50.2' end='192.168.50.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1217 20:23:37.803402   41406 main.go:143] libmachine: creating private network mk-kindnet-698465 192.168.50.0/24...
	I1217 20:23:37.875031   41406 main.go:143] libmachine: private network mk-kindnet-698465 192.168.50.0/24 created
	I1217 20:23:37.875288   41406 main.go:143] libmachine: <network>
	  <name>mk-kindnet-698465</name>
	  <uuid>5e64cb0f-024e-4f76-9dbe-2ee91a5ae9ff</uuid>
	  <bridge name='virbr2' stp='on' delay='0'/>
	  <mac address='52:54:00:cb:50:4f'/>
	  <dns enable='no'/>
	  <ip address='192.168.50.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.50.2' end='192.168.50.253'/>
	    </dhcp>
	  </ip>
	</network>
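	The XML above is the private network minikube created for this profile. As a rough illustration of how such a network can be defined and started programmatically, here is a minimal sketch assuming the libvirt.org/go/libvirt bindings; the exact package and helper names used by the kvm2 driver may differ.
	
	// define_net.go - define and start a private libvirt network from XML
	// (a sketch; error handling and cleanup are reduced to the essentials).
	package main
	
	import (
		"log"
	
		libvirt "libvirt.org/go/libvirt"
	)
	
	const netXML = `<network>
	  <name>mk-kindnet-698465</name>
	  <dns enable='no'/>
	  <ip address='192.168.50.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.50.2' end='192.168.50.253'/>
	    </dhcp>
	  </ip>
	</network>`
	
	func main() {
		conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config above
		if err != nil {
			log.Fatalf("connect: %v", err)
		}
		defer conn.Close()
	
		network, err := conn.NetworkDefineXML(netXML) // persistent definition
		if err != nil {
			log.Fatalf("define network: %v", err)
		}
		defer network.Free()
	
		if err := network.Create(); err != nil { // bring the bridge up
			log.Fatalf("start network: %v", err)
		}
		log.Println("private network mk-kindnet-698465 created")
	}
	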
	
	I1217 20:23:37.875315   41406 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22186-3611/.minikube/machines/kindnet-698465 ...
	I1217 20:23:37.875336   41406 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22186-3611/.minikube/cache/iso/amd64/minikube-v1.37.0-1765965980-22186-amd64.iso
	I1217 20:23:37.875359   41406 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22186-3611/.minikube
	I1217 20:23:37.875424   41406 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22186-3611/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22186-3611/.minikube/cache/iso/amd64/minikube-v1.37.0-1765965980-22186-amd64.iso...
	I1217 20:23:38.131543   41406 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22186-3611/.minikube/machines/kindnet-698465/id_rsa...
	I1217 20:23:38.261413   41406 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22186-3611/.minikube/machines/kindnet-698465/kindnet-698465.rawdisk...
	I1217 20:23:38.261456   41406 main.go:143] libmachine: Writing magic tar header
	I1217 20:23:38.261504   41406 main.go:143] libmachine: Writing SSH key tar header
	I1217 20:23:38.261627   41406 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22186-3611/.minikube/machines/kindnet-698465 ...
	I1217 20:23:38.261715   41406 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22186-3611/.minikube/machines/kindnet-698465
	I1217 20:23:38.261760   41406 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22186-3611/.minikube/machines/kindnet-698465 (perms=drwx------)
	I1217 20:23:38.261785   41406 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22186-3611/.minikube/machines
	I1217 20:23:38.261803   41406 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22186-3611/.minikube/machines (perms=drwxr-xr-x)
	I1217 20:23:38.261821   41406 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22186-3611/.minikube
	I1217 20:23:38.261840   41406 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22186-3611/.minikube (perms=drwxr-xr-x)
	I1217 20:23:38.261856   41406 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22186-3611
	I1217 20:23:38.261874   41406 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22186-3611 (perms=drwxrwxr-x)
	I1217 20:23:38.261890   41406 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1217 20:23:38.261907   41406 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1217 20:23:38.261921   41406 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1217 20:23:38.261938   41406 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1217 20:23:38.261952   41406 main.go:143] libmachine: checking permissions on dir: /home
	I1217 20:23:38.261964   41406 main.go:143] libmachine: skipping /home - not owner
	I1217 20:23:38.261971   41406 main.go:143] libmachine: defining domain...
	I1217 20:23:38.263156   41406 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>kindnet-698465</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22186-3611/.minikube/machines/kindnet-698465/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22186-3611/.minikube/machines/kindnet-698465/kindnet-698465.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-kindnet-698465'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1217 20:23:38.268225   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:02:e9:88 in network default
	I1217 20:23:38.268938   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:38.268963   41406 main.go:143] libmachine: starting domain...
	I1217 20:23:38.268970   41406 main.go:143] libmachine: ensuring networks are active...
	I1217 20:23:38.269763   41406 main.go:143] libmachine: Ensuring network default is active
	I1217 20:23:38.270176   41406 main.go:143] libmachine: Ensuring network mk-kindnet-698465 is active
	I1217 20:23:38.270986   41406 main.go:143] libmachine: getting domain XML...
	I1217 20:23:38.272202   41406 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>kindnet-698465</name>
	  <uuid>83b10cb7-b452-4767-a21f-3f78a8d775fb</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22186-3611/.minikube/machines/kindnet-698465/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22186-3611/.minikube/machines/kindnet-698465/kindnet-698465.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:8d:a5:2d'/>
	      <source network='mk-kindnet-698465'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:02:e9:88'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1217 20:23:39.565207   41406 main.go:143] libmachine: waiting for domain to start...
	I1217 20:23:39.566635   41406 main.go:143] libmachine: domain is now running
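	The domain XML printed above is first defined persistently and then booted. A minimal sketch of that define-then-start sequence, again assuming the libvirt.org/go/libvirt bindings and reading the full <domain> XML from a file (the file name kindnet-698465.xml is hypothetical):
	
	// define_domain.go - persistently define a domain from XML, then boot it.
	package main
	
	import (
		"log"
		"os"
	
		libvirt "libvirt.org/go/libvirt"
	)
	
	func defineAndStart(conn *libvirt.Connect, domainXML string) error {
		dom, err := conn.DomainDefineXML(domainXML) // define a (shut-off) domain
		if err != nil {
			return err
		}
		defer dom.Free()
		return dom.Create() // Create() starts a defined domain
	}
	
	func main() {
		xmlBytes, err := os.ReadFile("kindnet-698465.xml") // the full <domain> XML shown above
		if err != nil {
			log.Fatal(err)
		}
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()
	
		if err := defineAndStart(conn, string(xmlBytes)); err != nil {
			log.Fatal(err)
		}
		log.Println("domain is now running")
	}
	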
	I1217 20:23:39.566652   41406 main.go:143] libmachine: waiting for IP...
	I1217 20:23:39.567329   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:39.568150   41406 main.go:143] libmachine: no network interface addresses found for domain kindnet-698465 (source=lease)
	I1217 20:23:39.568166   41406 main.go:143] libmachine: trying to list again with source=arp
	I1217 20:23:39.568510   41406 main.go:143] libmachine: unable to find current IP address of domain kindnet-698465 in network mk-kindnet-698465 (interfaces detected: [])
	I1217 20:23:39.568580   41406 retry.go:31] will retry after 200.627988ms: waiting for domain to come up
	I1217 20:23:39.770818   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:39.771511   41406 main.go:143] libmachine: no network interface addresses found for domain kindnet-698465 (source=lease)
	I1217 20:23:39.771533   41406 main.go:143] libmachine: trying to list again with source=arp
	I1217 20:23:39.771871   41406 main.go:143] libmachine: unable to find current IP address of domain kindnet-698465 in network mk-kindnet-698465 (interfaces detected: [])
	I1217 20:23:39.771901   41406 retry.go:31] will retry after 301.782833ms: waiting for domain to come up
	I1217 20:23:40.075374   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:40.076066   41406 main.go:143] libmachine: no network interface addresses found for domain kindnet-698465 (source=lease)
	I1217 20:23:40.076092   41406 main.go:143] libmachine: trying to list again with source=arp
	I1217 20:23:40.076425   41406 main.go:143] libmachine: unable to find current IP address of domain kindnet-698465 in network mk-kindnet-698465 (interfaces detected: [])
	I1217 20:23:40.076470   41406 retry.go:31] will retry after 341.853479ms: waiting for domain to come up
	I1217 20:23:40.420366   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:40.421292   41406 main.go:143] libmachine: no network interface addresses found for domain kindnet-698465 (source=lease)
	I1217 20:23:40.421320   41406 main.go:143] libmachine: trying to list again with source=arp
	I1217 20:23:40.421773   41406 main.go:143] libmachine: unable to find current IP address of domain kindnet-698465 in network mk-kindnet-698465 (interfaces detected: [])
	I1217 20:23:40.421817   41406 retry.go:31] will retry after 393.806601ms: waiting for domain to come up
	I1217 20:23:40.817400   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:40.818221   41406 main.go:143] libmachine: no network interface addresses found for domain kindnet-698465 (source=lease)
	I1217 20:23:40.818236   41406 main.go:143] libmachine: trying to list again with source=arp
	I1217 20:23:40.818613   41406 main.go:143] libmachine: unable to find current IP address of domain kindnet-698465 in network mk-kindnet-698465 (interfaces detected: [])
	I1217 20:23:40.818648   41406 retry.go:31] will retry after 466.434322ms: waiting for domain to come up
	I1217 20:23:41.286398   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:41.287197   41406 main.go:143] libmachine: no network interface addresses found for domain kindnet-698465 (source=lease)
	I1217 20:23:41.287218   41406 main.go:143] libmachine: trying to list again with source=arp
	I1217 20:23:41.287619   41406 main.go:143] libmachine: unable to find current IP address of domain kindnet-698465 in network mk-kindnet-698465 (interfaces detected: [])
	I1217 20:23:41.287656   41406 retry.go:31] will retry after 724.641469ms: waiting for domain to come up
	I1217 20:23:42.013423   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:42.014031   41406 main.go:143] libmachine: no network interface addresses found for domain kindnet-698465 (source=lease)
	I1217 20:23:42.014049   41406 main.go:143] libmachine: trying to list again with source=arp
	I1217 20:23:42.014430   41406 main.go:143] libmachine: unable to find current IP address of domain kindnet-698465 in network mk-kindnet-698465 (interfaces detected: [])
	I1217 20:23:42.014472   41406 retry.go:31] will retry after 798.648498ms: waiting for domain to come up
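	The retry lines above poll for the guest's address: the DHCP lease table is consulted first (source=lease), then the ARP table (source=arp), and the check is repeated with a growing delay until an interface appears. A sketch of that loop, assuming the libvirt.org/go/libvirt bindings; the real driver also adds jitter and its own timeout handling.
	
	// wait_ip.go - poll a domain for an IP via DHCP leases, then ARP, with backoff.
	package main
	
	import (
		"fmt"
		"log"
		"time"
	
		libvirt "libvirt.org/go/libvirt"
	)
	
	func domainIP(dom *libvirt.Domain) (string, bool) {
		sources := []libvirt.DomainInterfaceAddressesSource{
			libvirt.DOMAIN_INTERFACE_ADDRESSES_SRC_LEASE, // DHCP leases (preferred)
			libvirt.DOMAIN_INTERFACE_ADDRESSES_SRC_ARP,   // ARP table fallback
		}
		for _, src := range sources {
			ifaces, err := dom.ListAllInterfaceAddresses(src)
			if err != nil {
				continue
			}
			for _, iface := range ifaces {
				for _, addr := range iface.Addrs {
					return addr.Addr, true
				}
			}
		}
		return "", false
	}
	
	func main() {
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()
	
		dom, err := conn.LookupDomainByName("kindnet-698465")
		if err != nil {
			log.Fatal(err)
		}
		defer dom.Free()
	
		delay := 200 * time.Millisecond
		deadline := time.Now().Add(3 * time.Minute)
		for time.Now().Before(deadline) {
			if ip, ok := domainIP(dom); ok {
				fmt.Println("domain IP:", ip)
				return
			}
			log.Printf("no address yet, retrying in %v", delay)
			time.Sleep(delay)
			delay += delay / 2 // grow the delay, roughly like the retries above
		}
		log.Fatal("timed out waiting for domain to come up")
	}
	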
	W1217 20:23:40.856943   40822 pod_ready.go:104] pod "coredns-66bc5c9577-z6sfq" is not "Ready", error: <nil>
	W1217 20:23:43.354458   40822 pod_ready.go:104] pod "coredns-66bc5c9577-z6sfq" is not "Ready", error: <nil>
	I1217 20:23:40.895686   39298 logs.go:123] Gathering logs for etcd [07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8] ...
	I1217 20:23:40.895717   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8"
	I1217 20:23:40.942230   39298 logs.go:123] Gathering logs for coredns [132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1] ...
	I1217 20:23:40.942259   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1"
	I1217 20:23:40.987633   39298 logs.go:123] Gathering logs for coredns [a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e] ...
	I1217 20:23:40.987663   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e"
	I1217 20:23:41.031428   39298 logs.go:123] Gathering logs for kube-scheduler [316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e] ...
	I1217 20:23:41.031454   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e"
	I1217 20:23:41.113325   39298 logs.go:123] Gathering logs for kube-scheduler [8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2] ...
	I1217 20:23:41.113371   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2"
	I1217 20:23:41.163589   39298 logs.go:123] Gathering logs for storage-provisioner [b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90] ...
	I1217 20:23:41.163621   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90"
	I1217 20:23:43.706658   39298 api_server.go:253] Checking apiserver healthz at https://192.168.83.130:8443/healthz ...
	I1217 20:23:43.707367   39298 api_server.go:269] stopped: https://192.168.83.130:8443/healthz: Get "https://192.168.83.130:8443/healthz": dial tcp 192.168.83.130:8443: connect: connection refused
	I1217 20:23:43.707429   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:23:43.707486   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:23:43.746181   39298 cri.go:89] found id: "83636ebff083364d3ed28b74e408a7dd6f4875ce6d80d9cbf6b580ee6c1180ad"
	I1217 20:23:43.746208   39298 cri.go:89] found id: ""
	I1217 20:23:43.746219   39298 logs.go:282] 1 containers: [83636ebff083364d3ed28b74e408a7dd6f4875ce6d80d9cbf6b580ee6c1180ad]
	I1217 20:23:43.746288   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:43.750886   39298 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:23:43.750968   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:23:43.791179   39298 cri.go:89] found id: "07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8"
	I1217 20:23:43.791206   39298 cri.go:89] found id: ""
	I1217 20:23:43.791216   39298 logs.go:282] 1 containers: [07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8]
	I1217 20:23:43.791281   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:43.795616   39298 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:23:43.795684   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:23:43.843187   39298 cri.go:89] found id: "132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1"
	I1217 20:23:43.843215   39298 cri.go:89] found id: "a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e"
	I1217 20:23:43.843220   39298 cri.go:89] found id: ""
	I1217 20:23:43.843229   39298 logs.go:282] 2 containers: [132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1 a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e]
	I1217 20:23:43.843307   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:43.848566   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:43.853943   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:23:43.854021   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:23:43.894724   39298 cri.go:89] found id: "316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e"
	I1217 20:23:43.894749   39298 cri.go:89] found id: "8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2"
	I1217 20:23:43.894756   39298 cri.go:89] found id: ""
	I1217 20:23:43.894765   39298 logs.go:282] 2 containers: [316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e 8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2]
	I1217 20:23:43.894838   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:43.900093   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:43.904373   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:23:43.904435   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:23:43.943546   39298 cri.go:89] found id: "5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde"
	I1217 20:23:43.943567   39298 cri.go:89] found id: ""
	I1217 20:23:43.943576   39298 logs.go:282] 1 containers: [5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde]
	I1217 20:23:43.943636   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:43.948687   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:23:43.948758   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:23:43.995515   39298 cri.go:89] found id: "fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda"
	I1217 20:23:43.995564   39298 cri.go:89] found id: ""
	I1217 20:23:43.995577   39298 logs.go:282] 1 containers: [fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda]
	I1217 20:23:43.995666   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:44.000435   39298 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:23:44.000511   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:23:44.038061   39298 cri.go:89] found id: ""
	I1217 20:23:44.038093   39298 logs.go:282] 0 containers: []
	W1217 20:23:44.038106   39298 logs.go:284] No container was found matching "kindnet"
	I1217 20:23:44.038113   39298 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 20:23:44.038183   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 20:23:44.075102   39298 cri.go:89] found id: "b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90"
	I1217 20:23:44.075130   39298 cri.go:89] found id: ""
	I1217 20:23:44.075141   39298 logs.go:282] 1 containers: [b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90]
	I1217 20:23:44.075203   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:44.079820   39298 logs.go:123] Gathering logs for kube-apiserver [83636ebff083364d3ed28b74e408a7dd6f4875ce6d80d9cbf6b580ee6c1180ad] ...
	I1217 20:23:44.079850   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83636ebff083364d3ed28b74e408a7dd6f4875ce6d80d9cbf6b580ee6c1180ad"
	I1217 20:23:44.121112   39298 logs.go:123] Gathering logs for coredns [132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1] ...
	I1217 20:23:44.121148   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1"
	I1217 20:23:44.160578   39298 logs.go:123] Gathering logs for coredns [a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e] ...
	I1217 20:23:44.160612   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e"
	I1217 20:23:44.203545   39298 logs.go:123] Gathering logs for kube-proxy [5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde] ...
	I1217 20:23:44.203586   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde"
	I1217 20:23:44.245166   39298 logs.go:123] Gathering logs for kube-controller-manager [fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda] ...
	I1217 20:23:44.245195   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda"
	I1217 20:23:44.284998   39298 logs.go:123] Gathering logs for storage-provisioner [b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90] ...
	I1217 20:23:44.285027   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90"
	I1217 20:23:44.323309   39298 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:23:44.323344   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:23:44.662235   39298 logs.go:123] Gathering logs for container status ...
	I1217 20:23:44.662274   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:23:44.709983   39298 logs.go:123] Gathering logs for kubelet ...
	I1217 20:23:44.710014   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:23:44.811658   39298 logs.go:123] Gathering logs for dmesg ...
	I1217 20:23:44.811693   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:23:44.828354   39298 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:23:44.828418   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:23:44.911962   39298 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:23:44.911996   39298 logs.go:123] Gathering logs for etcd [07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8] ...
	I1217 20:23:44.912013   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8"
	I1217 20:23:44.964466   39298 logs.go:123] Gathering logs for kube-scheduler [316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e] ...
	I1217 20:23:44.964497   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e"
	I1217 20:23:45.039251   39298 logs.go:123] Gathering logs for kube-scheduler [8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2] ...
	I1217 20:23:45.039297   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2"
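	Process 39298 keeps probing https://192.168.83.130:8443/healthz and getting "connection refused" while the apiserver is down. A bare-bones version of such a probe is sketched below; the real check uses the cluster CA bundle, so skipping TLS verification here is only to keep the example short.
	
	// healthz_poll.go - poll the apiserver /healthz endpoint until it answers 200.
	package main
	
	import (
		"crypto/tls"
		"log"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
			},
		}
		url := "https://192.168.83.130:8443/healthz"
		for attempt := 0; attempt < 60; attempt++ {
			resp, err := client.Get(url)
			if err != nil {
				log.Printf("stopped: %v", err) // e.g. connect: connection refused
				time.Sleep(2 * time.Second)
				continue
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				log.Println("apiserver is healthy")
				return
			}
			log.Printf("healthz returned %d", resp.StatusCode)
			time.Sleep(2 * time.Second)
		}
		log.Fatal("gave up waiting for a healthy apiserver")
	}
	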
	I1217 20:23:45.950276   41240 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 3148e9a334330e99df200e53da3dd65a4273e1bd7bfe071c3d5a2ab5babc79cd 00a2e25f105f3f6ead22ad8dc039761969eb90ee1fbadaddcc49dfff31ea179b 288a092a801206a702bf6578ae6b07e6044aead431acb0edbc60cf72d5ca3b17 1dbeb3a9804d156cf89fd691a0bb1fbcc3a6a27f4878706712834561cd962506 96157b431de6bce3a1d7a6419fab455461481cafe1319056a3a0efa2e1c076cb eb92e68f0672b340c271cd39f375352462fbcd4af3068254b0220df4d2490ea9 d3aa5bb79009bf7c5283af9edad023fa9e3089576286be67bfc0903b30de2deb 458be37c9164dc0dfbd59b0b8dcd61a892bf0878a72ca6f6387f5b534e8724ca 57b5dad3a6eb199d74ed65b35e1c272c026349deacd961f4b0ab358df4b1767a 4d2d7a7c7ff3887b933004ee0d6287b3244e6c54069ad29faa074d1cf1e142fa 4ecea3017ab351f428174d13d98abba7177414280659de95b9d0c5042ef461cb e6ead26278179f9e5597d5e890d711d7382a9ccec643ef3635c4f23c71576ee7: (20.44360538s)
	W1217 20:23:45.950400   41240 kubeadm.go:649] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 3148e9a334330e99df200e53da3dd65a4273e1bd7bfe071c3d5a2ab5babc79cd 00a2e25f105f3f6ead22ad8dc039761969eb90ee1fbadaddcc49dfff31ea179b 288a092a801206a702bf6578ae6b07e6044aead431acb0edbc60cf72d5ca3b17 1dbeb3a9804d156cf89fd691a0bb1fbcc3a6a27f4878706712834561cd962506 96157b431de6bce3a1d7a6419fab455461481cafe1319056a3a0efa2e1c076cb eb92e68f0672b340c271cd39f375352462fbcd4af3068254b0220df4d2490ea9 d3aa5bb79009bf7c5283af9edad023fa9e3089576286be67bfc0903b30de2deb 458be37c9164dc0dfbd59b0b8dcd61a892bf0878a72ca6f6387f5b534e8724ca 57b5dad3a6eb199d74ed65b35e1c272c026349deacd961f4b0ab358df4b1767a 4d2d7a7c7ff3887b933004ee0d6287b3244e6c54069ad29faa074d1cf1e142fa 4ecea3017ab351f428174d13d98abba7177414280659de95b9d0c5042ef461cb e6ead26278179f9e5597d5e890d711d7382a9ccec643ef3635c4f23c71576ee7: Process exited with status 1
	stdout:
	3148e9a334330e99df200e53da3dd65a4273e1bd7bfe071c3d5a2ab5babc79cd
	00a2e25f105f3f6ead22ad8dc039761969eb90ee1fbadaddcc49dfff31ea179b
	288a092a801206a702bf6578ae6b07e6044aead431acb0edbc60cf72d5ca3b17
	1dbeb3a9804d156cf89fd691a0bb1fbcc3a6a27f4878706712834561cd962506
	96157b431de6bce3a1d7a6419fab455461481cafe1319056a3a0efa2e1c076cb
	eb92e68f0672b340c271cd39f375352462fbcd4af3068254b0220df4d2490ea9
	
	stderr:
	E1217 20:23:45.942769    3638 log.go:32] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3aa5bb79009bf7c5283af9edad023fa9e3089576286be67bfc0903b30de2deb\": container with ID starting with d3aa5bb79009bf7c5283af9edad023fa9e3089576286be67bfc0903b30de2deb not found: ID does not exist" containerID="d3aa5bb79009bf7c5283af9edad023fa9e3089576286be67bfc0903b30de2deb"
	time="2025-12-17T20:23:45Z" level=fatal msg="stopping the container \"d3aa5bb79009bf7c5283af9edad023fa9e3089576286be67bfc0903b30de2deb\": rpc error: code = NotFound desc = could not find container \"d3aa5bb79009bf7c5283af9edad023fa9e3089576286be67bfc0903b30de2deb\": container with ID starting with d3aa5bb79009bf7c5283af9edad023fa9e3089576286be67bfc0903b30de2deb not found: ID does not exist"
	I1217 20:23:45.950474   41240 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1217 20:23:45.992857   41240 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 20:23:46.007688   41240 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 17 20:22 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5642 Dec 17 20:22 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1954 Dec 17 20:22 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5586 Dec 17 20:22 /etc/kubernetes/scheduler.conf
	
	I1217 20:23:46.007761   41240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 20:23:46.021626   41240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 20:23:46.035047   41240 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 20:23:46.035120   41240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 20:23:46.049538   41240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 20:23:46.061208   41240 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 20:23:46.061279   41240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 20:23:46.073804   41240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 20:23:46.085256   41240 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 20:23:46.085322   41240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 20:23:46.099589   41240 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 20:23:46.112608   41240 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 20:23:46.172057   41240 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
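	The sequence above checks whether the existing kubeconfig files on the node still point at https://control-plane.minikube.internal:8443, removes the ones that do not, and then lets kubeadm regenerate certificates and kubeconfigs. A simplified local sketch of that logic; the real code runs every step over SSH and via sudo, and also inspects admin.conf.
	
	// kubeconfig_refresh.go - drop stale kubeconfigs, then regenerate them with kubeadm.
	package main
	
	import (
		"log"
		"os"
		"os/exec"
		"strings"
	)
	
	const endpoint = "https://control-plane.minikube.internal:8443"
	
	func main() {
		confs := []string{
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, path := range confs {
			data, err := os.ReadFile(path)
			if err != nil || !strings.Contains(string(data), endpoint) {
				// Stale or unreadable: remove it so kubeadm recreates it below.
				log.Printf("%s does not reference %s - removing", path, endpoint)
				os.Remove(path)
			}
		}
		for _, phase := range []string{"certs", "kubeconfig"} {
			cmd := exec.Command("kubeadm", "init", "phase", phase, "all",
				"--config", "/var/tmp/minikube/kubeadm.yaml")
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				log.Fatalf("kubeadm init phase %s: %v", phase, err)
			}
		}
	}
	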
	I1217 20:23:42.814555   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:42.815167   41406 main.go:143] libmachine: no network interface addresses found for domain kindnet-698465 (source=lease)
	I1217 20:23:42.815186   41406 main.go:143] libmachine: trying to list again with source=arp
	I1217 20:23:42.815494   41406 main.go:143] libmachine: unable to find current IP address of domain kindnet-698465 in network mk-kindnet-698465 (interfaces detected: [])
	I1217 20:23:42.815553   41406 retry.go:31] will retry after 940.04333ms: waiting for domain to come up
	I1217 20:23:43.757872   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:43.758511   41406 main.go:143] libmachine: no network interface addresses found for domain kindnet-698465 (source=lease)
	I1217 20:23:43.758535   41406 main.go:143] libmachine: trying to list again with source=arp
	I1217 20:23:43.758857   41406 main.go:143] libmachine: unable to find current IP address of domain kindnet-698465 in network mk-kindnet-698465 (interfaces detected: [])
	I1217 20:23:43.758889   41406 retry.go:31] will retry after 1.733677818s: waiting for domain to come up
	I1217 20:23:45.494104   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:45.494887   41406 main.go:143] libmachine: no network interface addresses found for domain kindnet-698465 (source=lease)
	I1217 20:23:45.494909   41406 main.go:143] libmachine: trying to list again with source=arp
	I1217 20:23:45.495262   41406 main.go:143] libmachine: unable to find current IP address of domain kindnet-698465 in network mk-kindnet-698465 (interfaces detected: [])
	I1217 20:23:45.495300   41406 retry.go:31] will retry after 2.310490865s: waiting for domain to come up
	W1217 20:23:45.356222   40822 pod_ready.go:104] pod "coredns-66bc5c9577-z6sfq" is not "Ready", error: <nil>
	W1217 20:23:47.357188   40822 pod_ready.go:104] pod "coredns-66bc5c9577-z6sfq" is not "Ready", error: <nil>
	I1217 20:23:49.389718   40822 pod_ready.go:94] pod "coredns-66bc5c9577-z6sfq" is "Ready"
	I1217 20:23:49.389753   40822 pod_ready.go:86] duration metric: took 20.040719653s for pod "coredns-66bc5c9577-z6sfq" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:23:49.429169   40822 pod_ready.go:83] waiting for pod "etcd-auto-698465" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:23:49.444588   40822 pod_ready.go:94] pod "etcd-auto-698465" is "Ready"
	I1217 20:23:49.444636   40822 pod_ready.go:86] duration metric: took 15.432483ms for pod "etcd-auto-698465" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:23:49.448068   40822 pod_ready.go:83] waiting for pod "kube-apiserver-auto-698465" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:23:49.455431   40822 pod_ready.go:94] pod "kube-apiserver-auto-698465" is "Ready"
	I1217 20:23:49.455463   40822 pod_ready.go:86] duration metric: took 7.366193ms for pod "kube-apiserver-auto-698465" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:23:49.459776   40822 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-698465" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:23:49.552523   40822 pod_ready.go:94] pod "kube-controller-manager-auto-698465" is "Ready"
	I1217 20:23:49.552574   40822 pod_ready.go:86] duration metric: took 92.769387ms for pod "kube-controller-manager-auto-698465" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:23:49.754543   40822 pod_ready.go:83] waiting for pod "kube-proxy-hmgj9" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:23:50.154099   40822 pod_ready.go:94] pod "kube-proxy-hmgj9" is "Ready"
	I1217 20:23:50.154127   40822 pod_ready.go:86] duration metric: took 399.552989ms for pod "kube-proxy-hmgj9" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:23:50.353684   40822 pod_ready.go:83] waiting for pod "kube-scheduler-auto-698465" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:23:50.754231   40822 pod_ready.go:94] pod "kube-scheduler-auto-698465" is "Ready"
	I1217 20:23:50.754267   40822 pod_ready.go:86] duration metric: took 400.516361ms for pod "kube-scheduler-auto-698465" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:23:50.754284   40822 pod_ready.go:40] duration metric: took 31.413609777s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 20:23:50.819003   40822 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1217 20:23:50.820791   40822 out.go:179] * Done! kubectl is now configured to use "auto-698465" cluster and "default" namespace by default
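	The pod_ready.go lines above wait until every kube-system pod carrying one of the listed labels reports the Ready condition (or is gone). Below is a minimal client-go sketch of the Ready half of that wait; it assumes the KUBECONFIG environment variable points at the cluster, and unlike the real helper it does not tolerate pods disappearing.
	
	// pod_ready_wait.go - wait for labelled kube-system pods to become Ready.
	package main
	
	import (
		"context"
		"log"
		"os"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG")) // assumes KUBECONFIG is set
		if err != nil {
			log.Fatal(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
	
		selectors := []string{
			"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
			"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
		}
		for _, sel := range selectors {
			for {
				pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
					metav1.ListOptions{LabelSelector: sel})
				if err != nil {
					log.Fatal(err)
				}
				allReady := len(pods.Items) > 0
				for i := range pods.Items {
					if !podReady(&pods.Items[i]) {
						allReady = false
						break
					}
				}
				if allReady {
					log.Printf("pods matching %q are Ready", sel)
					break
				}
				time.Sleep(2 * time.Second)
			}
		}
	}
	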
	I1217 20:23:47.583607   39298 api_server.go:253] Checking apiserver healthz at https://192.168.83.130:8443/healthz ...
	I1217 20:23:47.584283   39298 api_server.go:269] stopped: https://192.168.83.130:8443/healthz: Get "https://192.168.83.130:8443/healthz": dial tcp 192.168.83.130:8443: connect: connection refused
	I1217 20:23:47.584334   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:23:47.584387   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:23:47.641093   39298 cri.go:89] found id: "83636ebff083364d3ed28b74e408a7dd6f4875ce6d80d9cbf6b580ee6c1180ad"
	I1217 20:23:47.641127   39298 cri.go:89] found id: ""
	I1217 20:23:47.641138   39298 logs.go:282] 1 containers: [83636ebff083364d3ed28b74e408a7dd6f4875ce6d80d9cbf6b580ee6c1180ad]
	I1217 20:23:47.641207   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:47.645555   39298 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:23:47.645639   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:23:47.687880   39298 cri.go:89] found id: "07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8"
	I1217 20:23:47.687904   39298 cri.go:89] found id: ""
	I1217 20:23:47.687913   39298 logs.go:282] 1 containers: [07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8]
	I1217 20:23:47.687978   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:47.692490   39298 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:23:47.692582   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:23:47.735855   39298 cri.go:89] found id: "132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1"
	I1217 20:23:47.735879   39298 cri.go:89] found id: "a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e"
	I1217 20:23:47.735884   39298 cri.go:89] found id: ""
	I1217 20:23:47.735894   39298 logs.go:282] 2 containers: [132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1 a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e]
	I1217 20:23:47.735957   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:47.742277   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:47.746754   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:23:47.746829   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:23:47.796489   39298 cri.go:89] found id: "316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e"
	I1217 20:23:47.796514   39298 cri.go:89] found id: "8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2"
	I1217 20:23:47.796520   39298 cri.go:89] found id: ""
	I1217 20:23:47.796540   39298 logs.go:282] 2 containers: [316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e 8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2]
	I1217 20:23:47.796614   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:47.801804   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:47.806180   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:23:47.806258   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:23:47.846729   39298 cri.go:89] found id: "5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde"
	I1217 20:23:47.846758   39298 cri.go:89] found id: ""
	I1217 20:23:47.846769   39298 logs.go:282] 1 containers: [5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde]
	I1217 20:23:47.846832   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:47.852671   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:23:47.852744   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:23:47.900186   39298 cri.go:89] found id: "fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda"
	I1217 20:23:47.900226   39298 cri.go:89] found id: ""
	I1217 20:23:47.900237   39298 logs.go:282] 1 containers: [fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda]
	I1217 20:23:47.900302   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:47.905074   39298 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:23:47.905162   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:23:47.951252   39298 cri.go:89] found id: ""
	I1217 20:23:47.951285   39298 logs.go:282] 0 containers: []
	W1217 20:23:47.951298   39298 logs.go:284] No container was found matching "kindnet"
	I1217 20:23:47.951307   39298 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 20:23:47.951368   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 20:23:48.013107   39298 cri.go:89] found id: "b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90"
	I1217 20:23:48.013136   39298 cri.go:89] found id: ""
	I1217 20:23:48.013147   39298 logs.go:282] 1 containers: [b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90]
	I1217 20:23:48.013212   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:48.018857   39298 logs.go:123] Gathering logs for kube-proxy [5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde] ...
	I1217 20:23:48.018884   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde"
	I1217 20:23:48.066746   39298 logs.go:123] Gathering logs for storage-provisioner [b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90] ...
	I1217 20:23:48.066785   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90"
	I1217 20:23:48.119816   39298 logs.go:123] Gathering logs for dmesg ...
	I1217 20:23:48.119852   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:23:48.137355   39298 logs.go:123] Gathering logs for etcd [07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8] ...
	I1217 20:23:48.137387   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8"
	I1217 20:23:48.193230   39298 logs.go:123] Gathering logs for coredns [132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1] ...
	I1217 20:23:48.193281   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1"
	I1217 20:23:48.244460   39298 logs.go:123] Gathering logs for kube-controller-manager [fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda] ...
	I1217 20:23:48.244490   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda"
	I1217 20:23:48.290486   39298 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:23:48.290555   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:23:48.804701   39298 logs.go:123] Gathering logs for container status ...
	I1217 20:23:48.804762   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:23:48.853757   39298 logs.go:123] Gathering logs for kubelet ...
	I1217 20:23:48.853796   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:23:48.975214   39298 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:23:48.975256   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:23:49.060053   39298 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:23:49.060085   39298 logs.go:123] Gathering logs for kube-apiserver [83636ebff083364d3ed28b74e408a7dd6f4875ce6d80d9cbf6b580ee6c1180ad] ...
	I1217 20:23:49.060106   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83636ebff083364d3ed28b74e408a7dd6f4875ce6d80d9cbf6b580ee6c1180ad"
	I1217 20:23:49.109489   39298 logs.go:123] Gathering logs for coredns [a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e] ...
	I1217 20:23:49.109542   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e"
	I1217 20:23:49.166484   39298 logs.go:123] Gathering logs for kube-scheduler [316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e] ...
	I1217 20:23:49.166547   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e"
	I1217 20:23:49.277669   39298 logs.go:123] Gathering logs for kube-scheduler [8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2] ...
	I1217 20:23:49.277719   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2"
	I1217 20:23:47.892999   41240 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.720893614s)
	I1217 20:23:47.893078   41240 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1217 20:23:48.308106   41240 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 20:23:48.383941   41240 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1217 20:23:48.504752   41240 api_server.go:52] waiting for apiserver process to appear ...
	I1217 20:23:48.504848   41240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:23:49.005152   41240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:23:49.505105   41240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:23:49.560108   41240 api_server.go:72] duration metric: took 1.055385553s to wait for apiserver process to appear ...
	I1217 20:23:49.560140   41240 api_server.go:88] waiting for apiserver healthz status ...
	I1217 20:23:49.560163   41240 api_server.go:253] Checking apiserver healthz at https://192.168.61.108:8443/healthz ...
	I1217 20:23:49.560753   41240 api_server.go:269] stopped: https://192.168.61.108:8443/healthz: Get "https://192.168.61.108:8443/healthz": dial tcp 192.168.61.108:8443: connect: connection refused
	I1217 20:23:50.060310   41240 api_server.go:253] Checking apiserver healthz at https://192.168.61.108:8443/healthz ...
	I1217 20:23:47.808168   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:47.809041   41406 main.go:143] libmachine: no network interface addresses found for domain kindnet-698465 (source=lease)
	I1217 20:23:47.809107   41406 main.go:143] libmachine: trying to list again with source=arp
	I1217 20:23:47.809610   41406 main.go:143] libmachine: unable to find current IP address of domain kindnet-698465 in network mk-kindnet-698465 (interfaces detected: [])
	I1217 20:23:47.809650   41406 retry.go:31] will retry after 2.388899192s: waiting for domain to come up
	I1217 20:23:50.199766   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:50.200512   41406 main.go:143] libmachine: no network interface addresses found for domain kindnet-698465 (source=lease)
	I1217 20:23:50.200560   41406 main.go:143] libmachine: trying to list again with source=arp
	I1217 20:23:50.200957   41406 main.go:143] libmachine: unable to find current IP address of domain kindnet-698465 in network mk-kindnet-698465 (interfaces detected: [])
	I1217 20:23:50.200986   41406 retry.go:31] will retry after 3.596030173s: waiting for domain to come up
	I1217 20:23:52.727895   41240 api_server.go:279] https://192.168.61.108:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1217 20:23:52.727921   41240 api_server.go:103] status: https://192.168.61.108:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1217 20:23:52.727934   41240 api_server.go:253] Checking apiserver healthz at https://192.168.61.108:8443/healthz ...
	I1217 20:23:52.778928   41240 api_server.go:279] https://192.168.61.108:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1217 20:23:52.779029   41240 api_server.go:103] status: https://192.168.61.108:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1217 20:23:53.060285   41240 api_server.go:253] Checking apiserver healthz at https://192.168.61.108:8443/healthz ...
	I1217 20:23:53.067054   41240 api_server.go:279] https://192.168.61.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 20:23:53.067081   41240 api_server.go:103] status: https://192.168.61.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 20:23:53.560619   41240 api_server.go:253] Checking apiserver healthz at https://192.168.61.108:8443/healthz ...
	I1217 20:23:53.565072   41240 api_server.go:279] https://192.168.61.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 20:23:53.565101   41240 api_server.go:103] status: https://192.168.61.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 20:23:54.060604   41240 api_server.go:253] Checking apiserver healthz at https://192.168.61.108:8443/healthz ...
	I1217 20:23:54.079118   41240 api_server.go:279] https://192.168.61.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 20:23:54.079159   41240 api_server.go:103] status: https://192.168.61.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 20:23:54.560855   41240 api_server.go:253] Checking apiserver healthz at https://192.168.61.108:8443/healthz ...
	I1217 20:23:54.568255   41240 api_server.go:279] https://192.168.61.108:8443/healthz returned 200:
	ok
	I1217 20:23:54.576011   41240 api_server.go:141] control plane version: v1.34.3
	I1217 20:23:54.576039   41240 api_server.go:131] duration metric: took 5.015893056s to wait for apiserver health ...
	I1217 20:23:54.576049   41240 cni.go:84] Creating CNI manager for ""
	I1217 20:23:54.576055   41240 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 20:23:54.577405   41240 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1217 20:23:54.579202   41240 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1217 20:23:54.594808   41240 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1217 20:23:54.626028   41240 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 20:23:54.635106   41240 system_pods.go:59] 6 kube-system pods found
	I1217 20:23:54.635144   41240 system_pods.go:61] "coredns-66bc5c9577-7grrd" [7659c433-1b61-45dd-a6ee-14007a1efcda] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 20:23:54.635154   41240 system_pods.go:61] "etcd-pause-722044" [46516f16-310e-4672-baba-2f07ada89233] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 20:23:54.635164   41240 system_pods.go:61] "kube-apiserver-pause-722044" [90032952-2169-493c-bbf4-a1163465ed8e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 20:23:54.635177   41240 system_pods.go:61] "kube-controller-manager-pause-722044" [a50d4a20-b5cf-4223-a26c-086d8e1e3c0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 20:23:54.635194   41240 system_pods.go:61] "kube-proxy-snthq" [24049acb-98c2-425b-b662-917a0f36e924] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 20:23:54.635202   41240 system_pods.go:61] "kube-scheduler-pause-722044" [5fcf648b-a03c-4a43-85f6-4cec9e10d0b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 20:23:54.635215   41240 system_pods.go:74] duration metric: took 9.167052ms to wait for pod list to return data ...
	I1217 20:23:54.635226   41240 node_conditions.go:102] verifying NodePressure condition ...
	I1217 20:23:54.640424   41240 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1217 20:23:54.640481   41240 node_conditions.go:123] node cpu capacity is 2
	I1217 20:23:54.640501   41240 node_conditions.go:105] duration metric: took 5.268699ms to run NodePressure ...
	I1217 20:23:54.640580   41240 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 20:23:55.032093   41240 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1217 20:23:55.037193   41240 kubeadm.go:744] kubelet initialised
	I1217 20:23:55.037225   41240 kubeadm.go:745] duration metric: took 5.097811ms waiting for restarted kubelet to initialise ...
	I1217 20:23:55.037247   41240 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 20:23:55.055261   41240 ops.go:34] apiserver oom_adj: -16
	I1217 20:23:55.055288   41240 kubeadm.go:602] duration metric: took 29.665913722s to restartPrimaryControlPlane
	I1217 20:23:55.055301   41240 kubeadm.go:403] duration metric: took 29.820299731s to StartCluster
	I1217 20:23:55.055323   41240 settings.go:142] acquiring lock: {Name:mke3c622f98fffe95e3e848232032c1bad05dc71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:23:55.055414   41240 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22186-3611/kubeconfig
	I1217 20:23:55.056440   41240 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-3611/kubeconfig: {Name:mk319ed0207c46a4a2ae4d9b320056846508447c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:23:55.056705   41240 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.61.108 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 20:23:55.056815   41240 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 20:23:55.057049   41240 config.go:182] Loaded profile config "pause-722044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:23:55.058621   41240 out.go:179] * Enabled addons: 
	I1217 20:23:55.058623   41240 out.go:179] * Verifying Kubernetes components...
	I1217 20:23:51.842753   39298 api_server.go:253] Checking apiserver healthz at https://192.168.83.130:8443/healthz ...
	I1217 20:23:51.843513   39298 api_server.go:269] stopped: https://192.168.83.130:8443/healthz: Get "https://192.168.83.130:8443/healthz": dial tcp 192.168.83.130:8443: connect: connection refused
	I1217 20:23:51.843607   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:23:51.843672   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:23:51.899729   39298 cri.go:89] found id: "83636ebff083364d3ed28b74e408a7dd6f4875ce6d80d9cbf6b580ee6c1180ad"
	I1217 20:23:51.899759   39298 cri.go:89] found id: ""
	I1217 20:23:51.899769   39298 logs.go:282] 1 containers: [83636ebff083364d3ed28b74e408a7dd6f4875ce6d80d9cbf6b580ee6c1180ad]
	I1217 20:23:51.899863   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:51.906094   39298 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:23:51.906166   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:23:51.962455   39298 cri.go:89] found id: "07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8"
	I1217 20:23:51.962481   39298 cri.go:89] found id: ""
	I1217 20:23:51.962492   39298 logs.go:282] 1 containers: [07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8]
	I1217 20:23:51.962573   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:51.968281   39298 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:23:51.968368   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:23:52.020810   39298 cri.go:89] found id: "132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1"
	I1217 20:23:52.020840   39298 cri.go:89] found id: "a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e"
	I1217 20:23:52.020847   39298 cri.go:89] found id: ""
	I1217 20:23:52.020857   39298 logs.go:282] 2 containers: [132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1 a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e]
	I1217 20:23:52.020920   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:52.026760   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:52.031846   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:23:52.031905   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:23:52.085122   39298 cri.go:89] found id: "316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e"
	I1217 20:23:52.085147   39298 cri.go:89] found id: "8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2"
	I1217 20:23:52.085153   39298 cri.go:89] found id: ""
	I1217 20:23:52.085163   39298 logs.go:282] 2 containers: [316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e 8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2]
	I1217 20:23:52.085229   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:52.091064   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:52.096407   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:23:52.096468   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:23:52.142038   39298 cri.go:89] found id: "5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde"
	I1217 20:23:52.142069   39298 cri.go:89] found id: ""
	I1217 20:23:52.142080   39298 logs.go:282] 1 containers: [5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde]
	I1217 20:23:52.142142   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:52.146324   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:23:52.146404   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:23:52.192321   39298 cri.go:89] found id: "fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda"
	I1217 20:23:52.192349   39298 cri.go:89] found id: ""
	I1217 20:23:52.192358   39298 logs.go:282] 1 containers: [fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda]
	I1217 20:23:52.192433   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:52.198132   39298 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:23:52.198201   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:23:52.250395   39298 cri.go:89] found id: ""
	I1217 20:23:52.250430   39298 logs.go:282] 0 containers: []
	W1217 20:23:52.250443   39298 logs.go:284] No container was found matching "kindnet"
	I1217 20:23:52.250451   39298 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 20:23:52.250522   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 20:23:52.293584   39298 cri.go:89] found id: "b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90"
	I1217 20:23:52.293618   39298 cri.go:89] found id: ""
	I1217 20:23:52.293631   39298 logs.go:282] 1 containers: [b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90]
	I1217 20:23:52.293692   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:52.299120   39298 logs.go:123] Gathering logs for coredns [132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1] ...
	I1217 20:23:52.299146   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1"
	I1217 20:23:52.346571   39298 logs.go:123] Gathering logs for kube-scheduler [8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2] ...
	I1217 20:23:52.346613   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2"
	I1217 20:23:52.391290   39298 logs.go:123] Gathering logs for kube-proxy [5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde] ...
	I1217 20:23:52.391323   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde"
	I1217 20:23:52.449891   39298 logs.go:123] Gathering logs for kube-controller-manager [fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda] ...
	I1217 20:23:52.449926   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda"
	I1217 20:23:52.493666   39298 logs.go:123] Gathering logs for container status ...
	I1217 20:23:52.493704   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:23:52.560755   39298 logs.go:123] Gathering logs for kubelet ...
	I1217 20:23:52.560796   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:23:52.687614   39298 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:23:52.687668   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:23:52.806780   39298 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:23:52.806818   39298 logs.go:123] Gathering logs for etcd [07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8] ...
	I1217 20:23:52.806835   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8"
	I1217 20:23:52.863623   39298 logs.go:123] Gathering logs for coredns [a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e] ...
	I1217 20:23:52.863667   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e"
	I1217 20:23:52.904336   39298 logs.go:123] Gathering logs for kube-scheduler [316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e] ...
	I1217 20:23:52.904366   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e"
	I1217 20:23:53.005597   39298 logs.go:123] Gathering logs for storage-provisioner [b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90] ...
	I1217 20:23:53.005637   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90"
	I1217 20:23:53.057598   39298 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:23:53.057640   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:23:53.396904   39298 logs.go:123] Gathering logs for dmesg ...
	I1217 20:23:53.396938   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:23:53.415155   39298 logs.go:123] Gathering logs for kube-apiserver [83636ebff083364d3ed28b74e408a7dd6f4875ce6d80d9cbf6b580ee6c1180ad] ...
	I1217 20:23:53.415186   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83636ebff083364d3ed28b74e408a7dd6f4875ce6d80d9cbf6b580ee6c1180ad"
	I1217 20:23:55.059769   41240 addons.go:530] duration metric: took 2.961994ms for enable addons: enabled=[]
	I1217 20:23:55.059792   41240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:23:55.299732   41240 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:23:55.320794   41240 node_ready.go:35] waiting up to 6m0s for node "pause-722044" to be "Ready" ...
	I1217 20:23:55.324200   41240 node_ready.go:49] node "pause-722044" is "Ready"
	I1217 20:23:55.324238   41240 node_ready.go:38] duration metric: took 3.378287ms for node "pause-722044" to be "Ready" ...
	I1217 20:23:55.324256   41240 api_server.go:52] waiting for apiserver process to appear ...
	I1217 20:23:55.324317   41240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:23:55.349627   41240 api_server.go:72] duration metric: took 292.888358ms to wait for apiserver process to appear ...
	I1217 20:23:55.349660   41240 api_server.go:88] waiting for apiserver healthz status ...
	I1217 20:23:55.349684   41240 api_server.go:253] Checking apiserver healthz at https://192.168.61.108:8443/healthz ...
	I1217 20:23:55.355183   41240 api_server.go:279] https://192.168.61.108:8443/healthz returned 200:
	ok
	I1217 20:23:55.356167   41240 api_server.go:141] control plane version: v1.34.3
	I1217 20:23:55.356192   41240 api_server.go:131] duration metric: took 6.524574ms to wait for apiserver health ...
	I1217 20:23:55.356203   41240 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 20:23:55.359214   41240 system_pods.go:59] 6 kube-system pods found
	I1217 20:23:55.359266   41240 system_pods.go:61] "coredns-66bc5c9577-7grrd" [7659c433-1b61-45dd-a6ee-14007a1efcda] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 20:23:55.359287   41240 system_pods.go:61] "etcd-pause-722044" [46516f16-310e-4672-baba-2f07ada89233] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 20:23:55.359301   41240 system_pods.go:61] "kube-apiserver-pause-722044" [90032952-2169-493c-bbf4-a1163465ed8e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 20:23:55.359312   41240 system_pods.go:61] "kube-controller-manager-pause-722044" [a50d4a20-b5cf-4223-a26c-086d8e1e3c0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 20:23:55.359321   41240 system_pods.go:61] "kube-proxy-snthq" [24049acb-98c2-425b-b662-917a0f36e924] Running
	I1217 20:23:55.359331   41240 system_pods.go:61] "kube-scheduler-pause-722044" [5fcf648b-a03c-4a43-85f6-4cec9e10d0b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 20:23:55.359348   41240 system_pods.go:74] duration metric: took 3.130779ms to wait for pod list to return data ...
	I1217 20:23:55.359360   41240 default_sa.go:34] waiting for default service account to be created ...
	I1217 20:23:55.361768   41240 default_sa.go:45] found service account: "default"
	I1217 20:23:55.361784   41240 default_sa.go:55] duration metric: took 2.41855ms for default service account to be created ...
	I1217 20:23:55.361791   41240 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 20:23:55.365589   41240 system_pods.go:86] 6 kube-system pods found
	I1217 20:23:55.365625   41240 system_pods.go:89] "coredns-66bc5c9577-7grrd" [7659c433-1b61-45dd-a6ee-14007a1efcda] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 20:23:55.365636   41240 system_pods.go:89] "etcd-pause-722044" [46516f16-310e-4672-baba-2f07ada89233] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 20:23:55.365645   41240 system_pods.go:89] "kube-apiserver-pause-722044" [90032952-2169-493c-bbf4-a1163465ed8e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 20:23:55.365655   41240 system_pods.go:89] "kube-controller-manager-pause-722044" [a50d4a20-b5cf-4223-a26c-086d8e1e3c0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 20:23:55.365663   41240 system_pods.go:89] "kube-proxy-snthq" [24049acb-98c2-425b-b662-917a0f36e924] Running
	I1217 20:23:55.365674   41240 system_pods.go:89] "kube-scheduler-pause-722044" [5fcf648b-a03c-4a43-85f6-4cec9e10d0b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 20:23:55.365684   41240 system_pods.go:126] duration metric: took 3.886111ms to wait for k8s-apps to be running ...
	I1217 20:23:55.365694   41240 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 20:23:55.365746   41240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 20:23:55.389935   41240 system_svc.go:56] duration metric: took 24.232603ms WaitForService to wait for kubelet
	I1217 20:23:55.389974   41240 kubeadm.go:587] duration metric: took 333.240142ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 20:23:55.390001   41240 node_conditions.go:102] verifying NodePressure condition ...
	I1217 20:23:55.394850   41240 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1217 20:23:55.394881   41240 node_conditions.go:123] node cpu capacity is 2
	I1217 20:23:55.394897   41240 node_conditions.go:105] duration metric: took 4.890051ms to run NodePressure ...
	I1217 20:23:55.394915   41240 start.go:242] waiting for startup goroutines ...
	I1217 20:23:55.394925   41240 start.go:247] waiting for cluster config update ...
	I1217 20:23:55.394940   41240 start.go:256] writing updated cluster config ...
	I1217 20:23:55.395199   41240 ssh_runner.go:195] Run: rm -f paused
	I1217 20:23:55.402851   41240 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 20:23:55.403869   41240 kapi.go:59] client config for pause-722044: &rest.Config{Host:"https://192.168.61.108:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22186-3611/.minikube/profiles/pause-722044/client.crt", KeyFile:"/home/jenkins/minikube-integration/22186-3611/.minikube/profiles/pause-722044/client.key", CAFile:"/home/jenkins/minikube-integration/22186-3611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2817500), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 20:23:55.408694   41240 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7grrd" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:23:53.798634   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:53.799460   41406 main.go:143] libmachine: domain kindnet-698465 has current primary IP address 192.168.50.49 and MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:53.799483   41406 main.go:143] libmachine: found domain IP: 192.168.50.49
	I1217 20:23:53.799493   41406 main.go:143] libmachine: reserving static IP address...
	I1217 20:23:53.799991   41406 main.go:143] libmachine: unable to find host DHCP lease matching {name: "kindnet-698465", mac: "52:54:00:8d:a5:2d", ip: "192.168.50.49"} in network mk-kindnet-698465
	I1217 20:23:54.017572   41406 main.go:143] libmachine: reserved static IP address 192.168.50.49 for domain kindnet-698465
	I1217 20:23:54.017607   41406 main.go:143] libmachine: waiting for SSH...
	I1217 20:23:54.017616   41406 main.go:143] libmachine: Getting to WaitForSSH function...
	I1217 20:23:54.022201   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:54.023000   41406 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:a5:2d", ip: ""} in network mk-kindnet-698465: {Iface:virbr2 ExpiryTime:2025-12-17 21:23:53 +0000 UTC Type:0 Mac:52:54:00:8d:a5:2d Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8d:a5:2d}
	I1217 20:23:54.023037   41406 main.go:143] libmachine: domain kindnet-698465 has defined IP address 192.168.50.49 and MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:54.023484   41406 main.go:143] libmachine: Using SSH client type: native
	I1217 20:23:54.023825   41406 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.49 22 <nil> <nil>}
	I1217 20:23:54.023843   41406 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1217 20:23:54.144827   41406 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 20:23:54.145262   41406 main.go:143] libmachine: domain creation complete
	I1217 20:23:54.147133   41406 machine.go:94] provisionDockerMachine start ...
	I1217 20:23:54.150413   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:54.151038   41406 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:a5:2d", ip: ""} in network mk-kindnet-698465: {Iface:virbr2 ExpiryTime:2025-12-17 21:23:53 +0000 UTC Type:0 Mac:52:54:00:8d:a5:2d Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:kindnet-698465 Clientid:01:52:54:00:8d:a5:2d}
	I1217 20:23:54.151074   41406 main.go:143] libmachine: domain kindnet-698465 has defined IP address 192.168.50.49 and MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:54.151327   41406 main.go:143] libmachine: Using SSH client type: native
	I1217 20:23:54.151639   41406 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.49 22 <nil> <nil>}
	I1217 20:23:54.151656   41406 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 20:23:54.278159   41406 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1217 20:23:54.278200   41406 buildroot.go:166] provisioning hostname "kindnet-698465"
	I1217 20:23:54.281927   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:54.282606   41406 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:a5:2d", ip: ""} in network mk-kindnet-698465: {Iface:virbr2 ExpiryTime:2025-12-17 21:23:53 +0000 UTC Type:0 Mac:52:54:00:8d:a5:2d Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:kindnet-698465 Clientid:01:52:54:00:8d:a5:2d}
	I1217 20:23:54.282642   41406 main.go:143] libmachine: domain kindnet-698465 has defined IP address 192.168.50.49 and MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:54.282923   41406 main.go:143] libmachine: Using SSH client type: native
	I1217 20:23:54.283170   41406 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.49 22 <nil> <nil>}
	I1217 20:23:54.283184   41406 main.go:143] libmachine: About to run SSH command:
	sudo hostname kindnet-698465 && echo "kindnet-698465" | sudo tee /etc/hostname
	I1217 20:23:54.484261   41406 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-698465
	
	I1217 20:23:54.487891   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:54.488323   41406 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:a5:2d", ip: ""} in network mk-kindnet-698465: {Iface:virbr2 ExpiryTime:2025-12-17 21:23:53 +0000 UTC Type:0 Mac:52:54:00:8d:a5:2d Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:kindnet-698465 Clientid:01:52:54:00:8d:a5:2d}
	I1217 20:23:54.488354   41406 main.go:143] libmachine: domain kindnet-698465 has defined IP address 192.168.50.49 and MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:54.488568   41406 main.go:143] libmachine: Using SSH client type: native
	I1217 20:23:54.488824   41406 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.49 22 <nil> <nil>}
	I1217 20:23:54.488840   41406 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-698465' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-698465/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-698465' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 20:23:54.625374   41406 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 20:23:54.625406   41406 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22186-3611/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-3611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-3611/.minikube}
	I1217 20:23:54.625429   41406 buildroot.go:174] setting up certificates
	I1217 20:23:54.625439   41406 provision.go:84] configureAuth start
	I1217 20:23:54.629036   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:54.629536   41406 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:a5:2d", ip: ""} in network mk-kindnet-698465: {Iface:virbr2 ExpiryTime:2025-12-17 21:23:53 +0000 UTC Type:0 Mac:52:54:00:8d:a5:2d Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:kindnet-698465 Clientid:01:52:54:00:8d:a5:2d}
	I1217 20:23:54.629568   41406 main.go:143] libmachine: domain kindnet-698465 has defined IP address 192.168.50.49 and MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:54.632680   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:54.633146   41406 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:a5:2d", ip: ""} in network mk-kindnet-698465: {Iface:virbr2 ExpiryTime:2025-12-17 21:23:53 +0000 UTC Type:0 Mac:52:54:00:8d:a5:2d Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:kindnet-698465 Clientid:01:52:54:00:8d:a5:2d}
	I1217 20:23:54.633178   41406 main.go:143] libmachine: domain kindnet-698465 has defined IP address 192.168.50.49 and MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:54.633357   41406 provision.go:143] copyHostCerts
	I1217 20:23:54.633439   41406 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-3611/.minikube/ca.pem, removing ...
	I1217 20:23:54.633459   41406 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-3611/.minikube/ca.pem
	I1217 20:23:54.633558   41406 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-3611/.minikube/ca.pem (1082 bytes)
	I1217 20:23:54.633686   41406 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-3611/.minikube/cert.pem, removing ...
	I1217 20:23:54.633698   41406 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-3611/.minikube/cert.pem
	I1217 20:23:54.633772   41406 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-3611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-3611/.minikube/cert.pem (1123 bytes)
	I1217 20:23:54.633881   41406 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-3611/.minikube/key.pem, removing ...
	I1217 20:23:54.633893   41406 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-3611/.minikube/key.pem
	I1217 20:23:54.633932   41406 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-3611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-3611/.minikube/key.pem (1679 bytes)
	I1217 20:23:54.634009   41406 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-3611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca-key.pem org=jenkins.kindnet-698465 san=[127.0.0.1 192.168.50.49 kindnet-698465 localhost minikube]
	I1217 20:23:54.723639   41406 provision.go:177] copyRemoteCerts
	I1217 20:23:54.723701   41406 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 20:23:54.726683   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:54.727102   41406 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:a5:2d", ip: ""} in network mk-kindnet-698465: {Iface:virbr2 ExpiryTime:2025-12-17 21:23:53 +0000 UTC Type:0 Mac:52:54:00:8d:a5:2d Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:kindnet-698465 Clientid:01:52:54:00:8d:a5:2d}
	I1217 20:23:54.727127   41406 main.go:143] libmachine: domain kindnet-698465 has defined IP address 192.168.50.49 and MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:54.727295   41406 sshutil.go:53] new ssh client: &{IP:192.168.50.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/kindnet-698465/id_rsa Username:docker}
	I1217 20:23:54.815884   41406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 20:23:54.852945   41406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 20:23:54.892108   41406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1217 20:23:54.942459   41406 provision.go:87] duration metric: took 317.007707ms to configureAuth
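The configureAuth step above generates a server certificate whose subject alternative names cover the VM's IP, its hostname, localhost and the literal name minikube, then copies it to /etc/docker on the guest. A compressed sketch of issuing such a SAN-bearing certificate with Go's standard library (self-signed here to keep it short; minikube actually signs the server cert with the ca.pem/ca-key.pem pair from its cert store):

	package main

	// Sketch only: a self-signed certificate carrying the same kind of SANs
	// as the provision step above. The organization, expiry and addresses
	// are taken from the log; everything else is simplified.

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.kindnet-698465"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile above
			DNSNames:     []string{"kindnet-698465", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.49")},
			KeyUsage:     x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}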
	I1217 20:23:54.942494   41406 buildroot.go:189] setting minikube options for container-runtime
	I1217 20:23:54.942705   41406 config.go:182] Loaded profile config "kindnet-698465": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:23:54.947831   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:54.949142   41406 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:a5:2d", ip: ""} in network mk-kindnet-698465: {Iface:virbr2 ExpiryTime:2025-12-17 21:23:53 +0000 UTC Type:0 Mac:52:54:00:8d:a5:2d Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:kindnet-698465 Clientid:01:52:54:00:8d:a5:2d}
	I1217 20:23:54.949217   41406 main.go:143] libmachine: domain kindnet-698465 has defined IP address 192.168.50.49 and MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:54.949694   41406 main.go:143] libmachine: Using SSH client type: native
	I1217 20:23:54.950102   41406 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.49 22 <nil> <nil>}
	I1217 20:23:54.950162   41406 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 20:23:55.501992   41406 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 20:23:55.502021   41406 machine.go:97] duration metric: took 1.354869926s to provisionDockerMachine
	I1217 20:23:55.502034   41406 client.go:176] duration metric: took 17.70696905s to LocalClient.Create
	I1217 20:23:55.502054   41406 start.go:167] duration metric: took 17.707026452s to libmachine.API.Create "kindnet-698465"
	I1217 20:23:55.502062   41406 start.go:293] postStartSetup for "kindnet-698465" (driver="kvm2")
	I1217 20:23:55.502074   41406 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 20:23:55.502149   41406 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 20:23:55.505622   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:55.506133   41406 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:a5:2d", ip: ""} in network mk-kindnet-698465: {Iface:virbr2 ExpiryTime:2025-12-17 21:23:53 +0000 UTC Type:0 Mac:52:54:00:8d:a5:2d Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:kindnet-698465 Clientid:01:52:54:00:8d:a5:2d}
	I1217 20:23:55.506168   41406 main.go:143] libmachine: domain kindnet-698465 has defined IP address 192.168.50.49 and MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:55.506383   41406 sshutil.go:53] new ssh client: &{IP:192.168.50.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/kindnet-698465/id_rsa Username:docker}
	I1217 20:23:55.603893   41406 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 20:23:55.609641   41406 info.go:137] Remote host: Buildroot 2025.02
	I1217 20:23:55.609678   41406 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-3611/.minikube/addons for local assets ...
	I1217 20:23:55.609771   41406 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-3611/.minikube/files for local assets ...
	I1217 20:23:55.609875   41406 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-3611/.minikube/files/etc/ssl/certs/75312.pem -> 75312.pem in /etc/ssl/certs
	I1217 20:23:55.609997   41406 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 20:23:55.625410   41406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/files/etc/ssl/certs/75312.pem --> /etc/ssl/certs/75312.pem (1708 bytes)
	I1217 20:23:55.660468   41406 start.go:296] duration metric: took 158.389033ms for postStartSetup
	I1217 20:23:55.663957   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:55.664328   41406 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:a5:2d", ip: ""} in network mk-kindnet-698465: {Iface:virbr2 ExpiryTime:2025-12-17 21:23:53 +0000 UTC Type:0 Mac:52:54:00:8d:a5:2d Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:kindnet-698465 Clientid:01:52:54:00:8d:a5:2d}
	I1217 20:23:55.664361   41406 main.go:143] libmachine: domain kindnet-698465 has defined IP address 192.168.50.49 and MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:55.664615   41406 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/config.json ...
	I1217 20:23:55.664821   41406 start.go:128] duration metric: took 17.871178068s to createHost
	I1217 20:23:55.666907   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:55.667256   41406 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:a5:2d", ip: ""} in network mk-kindnet-698465: {Iface:virbr2 ExpiryTime:2025-12-17 21:23:53 +0000 UTC Type:0 Mac:52:54:00:8d:a5:2d Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:kindnet-698465 Clientid:01:52:54:00:8d:a5:2d}
	I1217 20:23:55.667277   41406 main.go:143] libmachine: domain kindnet-698465 has defined IP address 192.168.50.49 and MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:55.667420   41406 main.go:143] libmachine: Using SSH client type: native
	I1217 20:23:55.667642   41406 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.49 22 <nil> <nil>}
	I1217 20:23:55.667655   41406 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1217 20:23:55.780042   41406 main.go:143] libmachine: SSH cmd err, output: <nil>: 1766003035.732721025
	
	I1217 20:23:55.780082   41406 fix.go:216] guest clock: 1766003035.732721025
	I1217 20:23:55.780093   41406 fix.go:229] Guest: 2025-12-17 20:23:55.732721025 +0000 UTC Remote: 2025-12-17 20:23:55.664834065 +0000 UTC m=+17.975072934 (delta=67.88696ms)
	I1217 20:23:55.780117   41406 fix.go:200] guest clock delta is within tolerance: 67.88696ms
	I1217 20:23:55.780125   41406 start.go:83] releasing machines lock for "kindnet-698465", held for 17.986598589s
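The guest-clock check above runs date +%s.%N on the VM and only flags a problem when the delta against the host clock exceeds a tolerance. A minimal local sketch of that comparison (running date locally instead of over SSH, and the one-second tolerance, are assumptions of this example):

	package main

	import (
		"fmt"
		"os/exec"
		"strconv"
		"strings"
		"time"
	)

	func main() {
		// Ask the "guest" for its clock as seconds.nanoseconds since the epoch.
		// Here we shell out locally; minikube runs the same command over SSH.
		out, err := exec.Command("date", "+%s.%N").Output()
		if err != nil {
			panic(err)
		}
		guestSec, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
		if err != nil {
			panic(err)
		}
		guest := time.Unix(0, int64(guestSec*float64(time.Second)))

		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = time.Second // assumed tolerance for this sketch
		if delta > tolerance {
			fmt.Printf("guest clock delta %v exceeds tolerance %v\n", delta, tolerance)
		} else {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		}
	}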
	I1217 20:23:55.783237   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:55.783610   41406 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:a5:2d", ip: ""} in network mk-kindnet-698465: {Iface:virbr2 ExpiryTime:2025-12-17 21:23:53 +0000 UTC Type:0 Mac:52:54:00:8d:a5:2d Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:kindnet-698465 Clientid:01:52:54:00:8d:a5:2d}
	I1217 20:23:55.783635   41406 main.go:143] libmachine: domain kindnet-698465 has defined IP address 192.168.50.49 and MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:55.784163   41406 ssh_runner.go:195] Run: cat /version.json
	I1217 20:23:55.784189   41406 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 20:23:55.787185   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:55.787400   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:55.787662   41406 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:a5:2d", ip: ""} in network mk-kindnet-698465: {Iface:virbr2 ExpiryTime:2025-12-17 21:23:53 +0000 UTC Type:0 Mac:52:54:00:8d:a5:2d Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:kindnet-698465 Clientid:01:52:54:00:8d:a5:2d}
	I1217 20:23:55.787703   41406 main.go:143] libmachine: domain kindnet-698465 has defined IP address 192.168.50.49 and MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:55.787846   41406 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:a5:2d", ip: ""} in network mk-kindnet-698465: {Iface:virbr2 ExpiryTime:2025-12-17 21:23:53 +0000 UTC Type:0 Mac:52:54:00:8d:a5:2d Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:kindnet-698465 Clientid:01:52:54:00:8d:a5:2d}
	I1217 20:23:55.787880   41406 main.go:143] libmachine: domain kindnet-698465 has defined IP address 192.168.50.49 and MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:55.787964   41406 sshutil.go:53] new ssh client: &{IP:192.168.50.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/kindnet-698465/id_rsa Username:docker}
	I1217 20:23:55.788172   41406 sshutil.go:53] new ssh client: &{IP:192.168.50.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/kindnet-698465/id_rsa Username:docker}
	I1217 20:23:55.895316   41406 ssh_runner.go:195] Run: systemctl --version
	I1217 20:23:55.902310   41406 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 20:23:56.064345   41406 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 20:23:56.073259   41406 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 20:23:56.073354   41406 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 20:23:56.097194   41406 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 20:23:56.097216   41406 start.go:496] detecting cgroup driver to use...
	I1217 20:23:56.097275   41406 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 20:23:56.119046   41406 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 20:23:56.139583   41406 docker.go:218] disabling cri-docker service (if available) ...
	I1217 20:23:56.139659   41406 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 20:23:56.165683   41406 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 20:23:56.192910   41406 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 20:23:56.359925   41406 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 20:23:56.598130   41406 docker.go:234] disabling docker service ...
	I1217 20:23:56.598212   41406 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 20:23:56.623351   41406 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 20:23:56.644291   41406 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 20:23:56.830226   41406 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 20:23:56.991804   41406 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 20:23:57.010078   41406 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 20:23:57.033745   41406 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 20:23:57.033818   41406 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:23:57.049968   41406 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 20:23:57.050050   41406 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:23:57.063102   41406 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:23:57.076124   41406 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:23:57.090003   41406 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 20:23:57.106032   41406 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:23:57.121880   41406 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:23:57.149691   41406 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:23:57.163787   41406 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 20:23:57.176637   41406 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1217 20:23:57.176705   41406 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1217 20:23:57.207445   41406 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
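The netfilter probe above follows a simple fallback: try the sysctl, load br_netfilter if the key is missing, then force IPv4 forwarding on. A rough equivalent (requires root; error handling kept deliberately thin):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// 1. Check whether the bridge netfilter sysctl is visible.
		if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
			fmt.Println("bridge-nf sysctl missing, loading br_netfilter:", err)
			// 2. Load the module so the sysctl appears (requires root).
			if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
				fmt.Println("modprobe br_netfilter failed:", err)
			}
		}
		// 3. Make sure IPv4 forwarding is on, as the log does with `echo 1 > ...`.
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
			fmt.Println("could not enable ip_forward (are you root?):", err)
		}
	}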
	I1217 20:23:57.222644   41406 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:23:57.375866   41406 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 20:23:57.522090   41406 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 20:23:57.522149   41406 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 20:23:57.527849   41406 start.go:564] Will wait 60s for crictl version
	I1217 20:23:57.527915   41406 ssh_runner.go:195] Run: which crictl
	I1217 20:23:57.532233   41406 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1217 20:23:57.569507   41406 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1217 20:23:57.569619   41406 ssh_runner.go:195] Run: crio --version
	I1217 20:23:57.598833   41406 ssh_runner.go:195] Run: crio --version
	I1217 20:23:57.633441   41406 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.29.1 ...
	I1217 20:23:57.636998   41406 main.go:143] libmachine: domain kindnet-698465 has defined MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:57.637415   41406 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:a5:2d", ip: ""} in network mk-kindnet-698465: {Iface:virbr2 ExpiryTime:2025-12-17 21:23:53 +0000 UTC Type:0 Mac:52:54:00:8d:a5:2d Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:kindnet-698465 Clientid:01:52:54:00:8d:a5:2d}
	I1217 20:23:57.637436   41406 main.go:143] libmachine: domain kindnet-698465 has defined IP address 192.168.50.49 and MAC address 52:54:00:8d:a5:2d in network mk-kindnet-698465
	I1217 20:23:57.637642   41406 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1217 20:23:57.642323   41406 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
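The /etc/hosts update above strips any stale line ending in the alias, appends a fresh IP-to-name mapping, and copies the result back in one shot. A small sketch of the same pattern against an ordinary file (the file name is a placeholder; the real command edits /etc/hosts on the guest under sudo):

	package main

	// Sketch of the "drop old entry, append fresh one, write back atomically"
	// pattern used above for /etc/hosts.

	import (
		"fmt"
		"os"
		"strings"
	)

	func ensureHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil && !os.IsNotExist(err) {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+name) {
				continue // drop any stale mapping for this name
			}
			if line != "" {
				kept = append(kept, line)
			}
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
		tmp := path + ".tmp"
		if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			return err
		}
		return os.Rename(tmp, path) // the log uses `sudo cp` instead of a rename
	}

	func main() {
		if err := ensureHostsEntry("hosts.example", "192.168.50.1", "host.minikube.internal"); err != nil {
			panic(err)
		}
	}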
	I1217 20:23:57.658925   41406 kubeadm.go:884] updating cluster {Name:kindnet-698465 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:kindnet-698465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.50.49 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 20:23:57.659059   41406 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:23:57.659126   41406 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:23:57.692027   41406 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.3". assuming images are not preloaded.
	I1217 20:23:57.692100   41406 ssh_runner.go:195] Run: which lz4
	I1217 20:23:57.696581   41406 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1217 20:23:57.701516   41406 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1217 20:23:57.701562   41406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340314847 bytes)
	I1217 20:23:55.961508   39298 api_server.go:253] Checking apiserver healthz at https://192.168.83.130:8443/healthz ...
	I1217 20:23:55.962184   39298 api_server.go:269] stopped: https://192.168.83.130:8443/healthz: Get "https://192.168.83.130:8443/healthz": dial tcp 192.168.83.130:8443: connect: connection refused
	I1217 20:23:55.962239   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:23:55.962313   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:23:56.002682   39298 cri.go:89] found id: "83636ebff083364d3ed28b74e408a7dd6f4875ce6d80d9cbf6b580ee6c1180ad"
	I1217 20:23:56.002703   39298 cri.go:89] found id: ""
	I1217 20:23:56.002711   39298 logs.go:282] 1 containers: [83636ebff083364d3ed28b74e408a7dd6f4875ce6d80d9cbf6b580ee6c1180ad]
	I1217 20:23:56.002764   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:56.006995   39298 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:23:56.007063   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:23:56.046438   39298 cri.go:89] found id: "07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8"
	I1217 20:23:56.046464   39298 cri.go:89] found id: ""
	I1217 20:23:56.046475   39298 logs.go:282] 1 containers: [07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8]
	I1217 20:23:56.046536   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:56.051233   39298 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:23:56.051294   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:23:56.091451   39298 cri.go:89] found id: "132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1"
	I1217 20:23:56.091478   39298 cri.go:89] found id: "a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e"
	I1217 20:23:56.091485   39298 cri.go:89] found id: ""
	I1217 20:23:56.091495   39298 logs.go:282] 2 containers: [132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1 a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e]
	I1217 20:23:56.091582   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:56.096663   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:56.102451   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:23:56.102512   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:23:56.150921   39298 cri.go:89] found id: "316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e"
	I1217 20:23:56.150946   39298 cri.go:89] found id: "8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2"
	I1217 20:23:56.150952   39298 cri.go:89] found id: ""
	I1217 20:23:56.150962   39298 logs.go:282] 2 containers: [316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e 8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2]
	I1217 20:23:56.151016   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:56.155354   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:56.160882   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:23:56.160949   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:23:56.210950   39298 cri.go:89] found id: "5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde"
	I1217 20:23:56.210978   39298 cri.go:89] found id: ""
	I1217 20:23:56.210988   39298 logs.go:282] 1 containers: [5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde]
	I1217 20:23:56.211051   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:56.215843   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:23:56.215930   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:23:56.258049   39298 cri.go:89] found id: "fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda"
	I1217 20:23:56.258078   39298 cri.go:89] found id: ""
	I1217 20:23:56.258087   39298 logs.go:282] 1 containers: [fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda]
	I1217 20:23:56.258151   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:56.263428   39298 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:23:56.263518   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:23:56.307346   39298 cri.go:89] found id: ""
	I1217 20:23:56.307386   39298 logs.go:282] 0 containers: []
	W1217 20:23:56.307399   39298 logs.go:284] No container was found matching "kindnet"
	I1217 20:23:56.307406   39298 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 20:23:56.307474   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 20:23:56.347335   39298 cri.go:89] found id: "b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90"
	I1217 20:23:56.347382   39298 cri.go:89] found id: ""
	I1217 20:23:56.347392   39298 logs.go:282] 1 containers: [b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90]
	I1217 20:23:56.347458   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:56.353378   39298 logs.go:123] Gathering logs for coredns [132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1] ...
	I1217 20:23:56.353406   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1"
	I1217 20:23:56.396043   39298 logs.go:123] Gathering logs for kube-proxy [5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde] ...
	I1217 20:23:56.396077   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde"
	I1217 20:23:56.442409   39298 logs.go:123] Gathering logs for kube-controller-manager [fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda] ...
	I1217 20:23:56.442447   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda"
	I1217 20:23:56.485726   39298 logs.go:123] Gathering logs for storage-provisioner [b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90] ...
	I1217 20:23:56.485756   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90"
	I1217 20:23:56.527320   39298 logs.go:123] Gathering logs for kube-apiserver [83636ebff083364d3ed28b74e408a7dd6f4875ce6d80d9cbf6b580ee6c1180ad] ...
	I1217 20:23:56.527358   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83636ebff083364d3ed28b74e408a7dd6f4875ce6d80d9cbf6b580ee6c1180ad"
	I1217 20:23:56.573136   39298 logs.go:123] Gathering logs for coredns [a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e] ...
	I1217 20:23:56.573166   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e"
	I1217 20:23:56.613510   39298 logs.go:123] Gathering logs for kube-scheduler [316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e] ...
	I1217 20:23:56.613556   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e"
	I1217 20:23:56.701959   39298 logs.go:123] Gathering logs for kube-scheduler [8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2] ...
	I1217 20:23:56.702008   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2"
	I1217 20:23:56.744706   39298 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:23:56.744741   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:23:57.087777   39298 logs.go:123] Gathering logs for container status ...
	I1217 20:23:57.087816   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:23:57.140435   39298 logs.go:123] Gathering logs for kubelet ...
	I1217 20:23:57.140467   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:23:57.264360   39298 logs.go:123] Gathering logs for dmesg ...
	I1217 20:23:57.264400   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:23:57.282892   39298 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:23:57.282926   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:23:57.353746   39298 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:23:57.353776   39298 logs.go:123] Gathering logs for etcd [07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8] ...
	I1217 20:23:57.353790   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8"
	I1217 20:23:59.904077   39298 api_server.go:253] Checking apiserver healthz at https://192.168.83.130:8443/healthz ...
	I1217 20:23:59.904855   39298 api_server.go:269] stopped: https://192.168.83.130:8443/healthz: Get "https://192.168.83.130:8443/healthz": dial tcp 192.168.83.130:8443: connect: connection refused
	I1217 20:23:59.904919   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:23:59.904968   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:23:59.953562   39298 cri.go:89] found id: "83636ebff083364d3ed28b74e408a7dd6f4875ce6d80d9cbf6b580ee6c1180ad"
	I1217 20:23:59.953590   39298 cri.go:89] found id: ""
	I1217 20:23:59.953601   39298 logs.go:282] 1 containers: [83636ebff083364d3ed28b74e408a7dd6f4875ce6d80d9cbf6b580ee6c1180ad]
	I1217 20:23:59.953667   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:23:59.958796   39298 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:23:59.958856   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:24:00.017438   39298 cri.go:89] found id: "07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8"
	I1217 20:24:00.017460   39298 cri.go:89] found id: ""
	I1217 20:24:00.017467   39298 logs.go:282] 1 containers: [07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8]
	I1217 20:24:00.017519   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:24:00.022089   39298 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:24:00.022166   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:24:00.071894   39298 cri.go:89] found id: "132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1"
	I1217 20:24:00.071924   39298 cri.go:89] found id: "a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e"
	I1217 20:24:00.071931   39298 cri.go:89] found id: ""
	I1217 20:24:00.071941   39298 logs.go:282] 2 containers: [132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1 a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e]
	I1217 20:24:00.072011   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:24:00.079410   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:24:00.086456   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:24:00.086545   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:24:00.131603   39298 cri.go:89] found id: "316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e"
	I1217 20:24:00.131630   39298 cri.go:89] found id: "8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2"
	I1217 20:24:00.131637   39298 cri.go:89] found id: ""
	I1217 20:24:00.131645   39298 logs.go:282] 2 containers: [316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e 8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2]
	I1217 20:24:00.131710   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:24:00.137862   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:24:00.142438   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:24:00.142509   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:24:00.187259   39298 cri.go:89] found id: "5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde"
	I1217 20:24:00.187287   39298 cri.go:89] found id: ""
	I1217 20:24:00.187298   39298 logs.go:282] 1 containers: [5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde]
	I1217 20:24:00.187364   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:24:00.193451   39298 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:24:00.193547   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:24:00.247169   39298 cri.go:89] found id: "fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda"
	I1217 20:24:00.247199   39298 cri.go:89] found id: ""
	I1217 20:24:00.247209   39298 logs.go:282] 1 containers: [fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda]
	I1217 20:24:00.247270   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:24:00.253230   39298 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:24:00.253321   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:24:00.294038   39298 cri.go:89] found id: ""
	I1217 20:24:00.294065   39298 logs.go:282] 0 containers: []
	W1217 20:24:00.294072   39298 logs.go:284] No container was found matching "kindnet"
	I1217 20:24:00.294079   39298 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 20:24:00.294129   39298 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 20:24:00.336759   39298 cri.go:89] found id: "b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90"
	I1217 20:24:00.336789   39298 cri.go:89] found id: ""
	I1217 20:24:00.336800   39298 logs.go:282] 1 containers: [b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90]
	I1217 20:24:00.336882   39298 ssh_runner.go:195] Run: which crictl
	I1217 20:24:00.343312   39298 logs.go:123] Gathering logs for kube-scheduler [316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e] ...
	I1217 20:24:00.343375   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 316730e92f006899311da2b7db52a70a1af5347b30695fe0ee0701552fb45d3e"
	I1217 20:24:00.465960   39298 logs.go:123] Gathering logs for container status ...
	I1217 20:24:00.466008   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:24:00.520650   39298 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:24:00.520678   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:24:00.597412   39298 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:24:00.597440   39298 logs.go:123] Gathering logs for coredns [132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1] ...
	I1217 20:24:00.597457   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 132a752f13bb4aa0b7f4c4a3952fd8bcd1646bc36a90f64fa88db86fe9e5a1f1"
	I1217 20:24:00.647034   39298 logs.go:123] Gathering logs for coredns [a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e] ...
	I1217 20:24:00.647067   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a77f0603c36329b4cd5aad7806b42083052532520bb497062511bce0acf1c51e"
	I1217 20:24:00.693994   39298 logs.go:123] Gathering logs for kube-scheduler [8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2] ...
	I1217 20:24:00.694028   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e548ed09b1c7b4919c8383922d76f2df618aea3e9447d8a07140484414f17a2"
	I1217 20:24:00.756516   39298 logs.go:123] Gathering logs for kube-proxy [5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde] ...
	I1217 20:24:00.756568   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5090abfc2374e765792799654f440dfe5957bfc10867a4e7d601eea8862c7dde"
	I1217 20:24:00.838244   39298 logs.go:123] Gathering logs for kube-controller-manager [fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda] ...
	I1217 20:24:00.838277   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc043b9734759e7de86714d563b0d4b858f415e2ab143fac37484904ff95bbda"
	W1217 20:23:57.418926   41240 pod_ready.go:104] pod "coredns-66bc5c9577-7grrd" is not "Ready", error: <nil>
	I1217 20:23:57.918031   41240 pod_ready.go:94] pod "coredns-66bc5c9577-7grrd" is "Ready"
	I1217 20:23:57.918067   41240 pod_ready.go:86] duration metric: took 2.509345108s for pod "coredns-66bc5c9577-7grrd" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:23:57.921010   41240 pod_ready.go:83] waiting for pod "etcd-pause-722044" in "kube-system" namespace to be "Ready" or be gone ...
	W1217 20:23:59.930426   41240 pod_ready.go:104] pod "etcd-pause-722044" is not "Ready", error: <nil>
	I1217 20:23:59.075028   41406 crio.go:462] duration metric: took 1.378503564s to copy over tarball
	I1217 20:23:59.075105   41406 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1217 20:24:00.801365   41406 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.72622919s)
	I1217 20:24:00.801397   41406 crio.go:469] duration metric: took 1.726339596s to extract the tarball
	I1217 20:24:00.801405   41406 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1217 20:24:00.851178   41406 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:24:00.894876   41406 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:24:00.894914   41406 cache_images.go:86] Images are preloaded, skipping loading
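The preload check above asks crictl for its image list in JSON and decides whether the cached images still need to be loaded. A rough version of that check (the expected tag is the one named in the log; the exact JSON field names are assumptions based on typical crictl output):

	package main

	// Rough version of the "are the preloaded images already present?" check:
	// ask crictl for its image list and look for one expected tag.

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func hasImage(want string) (bool, error) {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			return false, err
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			return false, err
		}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				if tag == want {
					return true, nil
				}
			}
		}
		return false, nil
	}

	func main() {
		ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.34.3")
		fmt.Println(ok, err)
	}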
	I1217 20:24:00.894925   41406 kubeadm.go:935] updating node { 192.168.50.49 8443 v1.34.3 crio true true} ...
	I1217 20:24:00.895043   41406 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-698465 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.49
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:kindnet-698465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I1217 20:24:00.895135   41406 ssh_runner.go:195] Run: crio config
	I1217 20:24:00.953408   41406 cni.go:84] Creating CNI manager for "kindnet"
	I1217 20:24:00.953447   41406 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 20:24:00.953475   41406 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.49 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-698465 NodeName:kindnet-698465 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.49"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.49 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 20:24:00.953660   41406 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.49
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-698465"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.49"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.49"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
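The kubeadm configuration dumped above is presumably rendered from a Go template with the per-profile values filled in. A toy rendering of a few of those fields (the template text here is illustrative, not minikube's actual template):

	package main

	import (
		"os"
		"text/template"
	)

	// Toy template covering a handful of the fields from the config above;
	// the real template is much larger.
	const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	clusterName: {{.ClusterName}}
	kubernetesVersion: {{.KubernetesVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceCIDR}}
	`

	type params struct {
		ClusterName       string
		KubernetesVersion string
		PodSubnet         string
		ServiceCIDR       string
	}

	func main() {
		t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
		p := params{"mk", "v1.34.3", "10.244.0.0/16", "10.96.0.0/12"}
		if err := t.Execute(os.Stdout, p); err != nil {
			panic(err)
		}
	}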
	
	I1217 20:24:00.953739   41406 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 20:24:00.971394   41406 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 20:24:00.971471   41406 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 20:24:00.989398   41406 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1217 20:24:01.018287   41406 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 20:24:01.044675   41406 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1217 20:24:01.067077   41406 ssh_runner.go:195] Run: grep 192.168.50.49	control-plane.minikube.internal$ /etc/hosts
	I1217 20:24:01.071796   41406 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.49	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 20:24:01.086400   41406 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:24:01.231760   41406 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:24:01.272729   41406 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465 for IP: 192.168.50.49
	I1217 20:24:01.272764   41406 certs.go:195] generating shared ca certs ...
	I1217 20:24:01.272781   41406 certs.go:227] acquiring lock for ca certs: {Name:mka9d751f3e3cbcb654d1f1d24f2b10b27bc58a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:24:01.272948   41406 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-3611/.minikube/ca.key
	I1217 20:24:01.273001   41406 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-3611/.minikube/proxy-client-ca.key
	I1217 20:24:01.273015   41406 certs.go:257] generating profile certs ...
	I1217 20:24:01.273081   41406 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/client.key
	I1217 20:24:01.273113   41406 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/client.crt with IP's: []
	I1217 20:24:01.382323   41406 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/client.crt ...
	I1217 20:24:01.382354   41406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/client.crt: {Name:mk40e4b55da943b02e2b580c004ca615e5767ee4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:24:01.382520   41406 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/client.key ...
	I1217 20:24:01.382543   41406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/client.key: {Name:mk017177724a03f6f4e4fa3a06dd7000325479c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:24:01.382634   41406 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/apiserver.key.06ebfaef
	I1217 20:24:01.382649   41406 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/apiserver.crt.06ebfaef with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.49]
	I1217 20:24:01.449287   41406 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/apiserver.crt.06ebfaef ...
	I1217 20:24:01.449313   41406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/apiserver.crt.06ebfaef: {Name:mk47a3c15ce779e642f993485cba2f2f1b770ad7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:24:01.449522   41406 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/apiserver.key.06ebfaef ...
	I1217 20:24:01.449570   41406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/apiserver.key.06ebfaef: {Name:mk804d17be1e550af07ee0c34197db572f23c394 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:24:01.449713   41406 certs.go:382] copying /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/apiserver.crt.06ebfaef -> /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/apiserver.crt
	I1217 20:24:01.449845   41406 certs.go:386] copying /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/apiserver.key.06ebfaef -> /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/apiserver.key
	I1217 20:24:01.450015   41406 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/proxy-client.key
	I1217 20:24:01.450045   41406 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/proxy-client.crt with IP's: []
	I1217 20:24:01.479857   41406 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/proxy-client.crt ...
	I1217 20:24:01.479890   41406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/proxy-client.crt: {Name:mka17d60ef037f9ca717fce55913794601abebf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:24:01.480076   41406 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/proxy-client.key ...
	I1217 20:24:01.480092   41406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/proxy-client.key: {Name:mk522bc70fda4b101cdce9cf05149327853db3ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:24:01.480306   41406 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-3611/.minikube/certs/7531.pem (1338 bytes)
	W1217 20:24:01.480356   41406 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-3611/.minikube/certs/7531_empty.pem, impossibly tiny 0 bytes
	I1217 20:24:01.480366   41406 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:24:01.480393   41406 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:24:01.480415   41406 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-3611/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:24:01.480450   41406 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-3611/.minikube/certs/key.pem (1679 bytes)
	I1217 20:24:01.480490   41406 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-3611/.minikube/files/etc/ssl/certs/75312.pem (1708 bytes)
	I1217 20:24:01.481042   41406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 20:24:01.520181   41406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 20:24:01.560909   41406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 20:24:01.596064   41406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 20:24:01.629401   41406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1217 20:24:01.660208   41406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 20:24:01.695802   41406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 20:24:01.729764   41406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 20:24:01.765740   41406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/certs/7531.pem --> /usr/share/ca-certificates/7531.pem (1338 bytes)
	I1217 20:24:01.798192   41406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/files/etc/ssl/certs/75312.pem --> /usr/share/ca-certificates/75312.pem (1708 bytes)
	I1217 20:24:01.828349   41406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:24:01.859509   41406 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 20:24:01.881654   41406 ssh_runner.go:195] Run: openssl version
	I1217 20:24:01.888323   41406 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:24:01.901980   41406 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:24:01.915302   41406 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:24:01.921126   41406 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 19:21 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:24:01.921181   41406 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:24:01.932239   41406 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:24:01.946269   41406 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 20:24:01.962070   41406 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7531.pem
	I1217 20:24:01.979232   41406 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7531.pem /etc/ssl/certs/7531.pem
	I1217 20:24:01.994761   41406 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7531.pem
	I1217 20:24:02.004561   41406 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 19:32 /usr/share/ca-certificates/7531.pem
	I1217 20:24:02.004637   41406 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7531.pem
	I1217 20:24:02.017106   41406 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:24:02.036258   41406 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/7531.pem /etc/ssl/certs/51391683.0
	I1217 20:24:02.050135   41406 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/75312.pem
	I1217 20:24:02.063199   41406 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/75312.pem /etc/ssl/certs/75312.pem
	I1217 20:24:02.075825   41406 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75312.pem
	I1217 20:24:02.081883   41406 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 19:32 /usr/share/ca-certificates/75312.pem
	I1217 20:24:02.081959   41406 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75312.pem
	I1217 20:24:02.089916   41406 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:24:02.104354   41406 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/75312.pem /etc/ssl/certs/3ec20f2e.0
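
For context on the openssl/ln pairs above: this is minikube publishing each CA into the guest's trust store. "openssl x509 -hash -noout" prints the certificate's subject hash (b5213941 for minikubeCA.pem here), and the "<hash>.0" symlink under /etc/ssl/certs is what OpenSSL-based clients resolve when validating chains. A minimal Go sketch of that hash-and-link step follows; the paths passed in main are illustrative placeholders, not copied verbatim from this run.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert mirrors the logged sequence: compute the OpenSSL subject hash of a
// CA certificate and publish it into the trust directory as <hash>.0.
func linkCACert(certPath, trustDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(trustDir, hash+".0")
	_ = os.Remove(link) // emulate the -f in `ln -fs`: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	// Hypothetical example paths for illustration only.
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
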
	I1217 20:24:02.116469   41406 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 20:24:02.121460   41406 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 20:24:02.121523   41406 kubeadm.go:401] StartCluster: {Name:kindnet-698465 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3
ClusterName:kindnet-698465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.50.49 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:24:02.121623   41406 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 20:24:02.121701   41406 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:24:02.158952   41406 cri.go:89] found id: ""
	I1217 20:24:02.159024   41406 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 20:24:02.174479   41406 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 20:24:02.187264   41406 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 20:24:02.200010   41406 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 20:24:02.200026   41406 kubeadm.go:158] found existing configuration files:
	
	I1217 20:24:02.200082   41406 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 20:24:02.211352   41406 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 20:24:02.211410   41406 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 20:24:02.224324   41406 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 20:24:02.236206   41406 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 20:24:02.236264   41406 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 20:24:02.250140   41406 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 20:24:02.263210   41406 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 20:24:02.263295   41406 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 20:24:02.275894   41406 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 20:24:02.287687   41406 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 20:24:02.287758   41406 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 20:24:02.300737   41406 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1217 20:24:02.354484   41406 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1217 20:24:02.354594   41406 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 20:24:02.470097   41406 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 20:24:02.470211   41406 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 20:24:02.470363   41406 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 20:24:02.484010   41406 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 20:24:00.892369   39298 logs.go:123] Gathering logs for storage-provisioner [b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90] ...
	I1217 20:24:00.892401   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8d315edce38a6fc251eccc663d1048bab927d765721ccf91098f39439dd3e90"
	I1217 20:24:00.956742   39298 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:24:00.956770   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:24:01.337094   39298 logs.go:123] Gathering logs for kubelet ...
	I1217 20:24:01.337127   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:24:01.446908   39298 logs.go:123] Gathering logs for dmesg ...
	I1217 20:24:01.446958   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:24:01.463439   39298 logs.go:123] Gathering logs for kube-apiserver [83636ebff083364d3ed28b74e408a7dd6f4875ce6d80d9cbf6b580ee6c1180ad] ...
	I1217 20:24:01.463470   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83636ebff083364d3ed28b74e408a7dd6f4875ce6d80d9cbf6b580ee6c1180ad"
	I1217 20:24:01.510072   39298 logs.go:123] Gathering logs for etcd [07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8] ...
	I1217 20:24:01.510104   39298 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07070535d1951f1a33b0b2d27753eba078059255c87d15929ef44d150a89e8d8"
	I1217 20:24:04.073445   39298 api_server.go:253] Checking apiserver healthz at https://192.168.83.130:8443/healthz ...
	W1217 20:24:02.427816   41240 pod_ready.go:104] pod "etcd-pause-722044" is not "Ready", error: <nil>
	W1217 20:24:04.428092   41240 pod_ready.go:104] pod "etcd-pause-722044" is not "Ready", error: <nil>
	I1217 20:24:06.426328   41240 pod_ready.go:94] pod "etcd-pause-722044" is "Ready"
	I1217 20:24:06.426363   41240 pod_ready.go:86] duration metric: took 8.505323532s for pod "etcd-pause-722044" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:24:06.429134   41240 pod_ready.go:83] waiting for pod "kube-apiserver-pause-722044" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:24:06.433673   41240 pod_ready.go:94] pod "kube-apiserver-pause-722044" is "Ready"
	I1217 20:24:06.433701   41240 pod_ready.go:86] duration metric: took 4.547925ms for pod "kube-apiserver-pause-722044" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:24:06.435771   41240 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-722044" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:24:06.440685   41240 pod_ready.go:94] pod "kube-controller-manager-pause-722044" is "Ready"
	I1217 20:24:06.440712   41240 pod_ready.go:86] duration metric: took 4.916476ms for pod "kube-controller-manager-pause-722044" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:24:06.443472   41240 pod_ready.go:83] waiting for pod "kube-proxy-snthq" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:24:06.625267   41240 pod_ready.go:94] pod "kube-proxy-snthq" is "Ready"
	I1217 20:24:06.625293   41240 pod_ready.go:86] duration metric: took 181.802269ms for pod "kube-proxy-snthq" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:24:06.825713   41240 pod_ready.go:83] waiting for pod "kube-scheduler-pause-722044" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:24:07.226178   41240 pod_ready.go:94] pod "kube-scheduler-pause-722044" is "Ready"
	I1217 20:24:07.226203   41240 pod_ready.go:86] duration metric: took 400.45979ms for pod "kube-scheduler-pause-722044" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:24:07.226213   41240 pod_ready.go:40] duration metric: took 11.823328299s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 20:24:07.279962   41240 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1217 20:24:07.281793   41240 out.go:179] * Done! kubectl is now configured to use "pause-722044" cluster and "default" namespace by default
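
The pod_ready waits above poll each kube-system pod until its PodReady condition reports True. A small client-go sketch of the same check is shown below purely as an illustration: the kubeconfig path is an assumption, the pod name is taken from the log, and the real harness performs this through its own helpers rather than this code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True, which is
// the notion of "Ready" used in the pod_ready.go lines above.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path is a hypothetical example value.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "etcd-pause-722044", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("ready:", isPodReady(pod))
}
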
	I1217 20:24:02.757947   41406 out.go:252]   - Generating certificates and keys ...
	I1217 20:24:02.758105   41406 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 20:24:02.758179   41406 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 20:24:02.758236   41406 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 20:24:03.392356   41406 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 20:24:03.865938   41406 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 20:24:05.002288   41406 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 20:24:05.303955   41406 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 20:24:05.304103   41406 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kindnet-698465 localhost] and IPs [192.168.50.49 127.0.0.1 ::1]
	I1217 20:24:05.727414   41406 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 20:24:05.727593   41406 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kindnet-698465 localhost] and IPs [192.168.50.49 127.0.0.1 ::1]
	I1217 20:24:05.986325   41406 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 20:24:06.032431   41406 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 20:24:06.268771   41406 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 20:24:06.269101   41406 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 20:24:06.376648   41406 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 20:24:06.781743   41406 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 20:24:06.908193   41406 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 20:24:07.155730   41406 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 20:24:07.280125   41406 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 20:24:07.280884   41406 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 20:24:07.283968   41406 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 20:24:07.286642   41406 out.go:252]   - Booting up control plane ...
	I1217 20:24:07.286796   41406 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 20:24:07.286917   41406 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 20:24:07.287043   41406 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 20:24:07.307242   41406 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 20:24:07.307375   41406 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 20:24:07.320087   41406 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 20:24:07.320230   41406 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 20:24:07.320307   41406 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 20:24:07.551322   41406 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 20:24:07.551477   41406 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
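
The kubelet-check step above polls the kubelet's local healthz endpoint until it answers 200 or the 4m0s budget runs out. A minimal Go sketch of such a poll, using only the URL and timeout quoted in the log line (the 2-second retry interval is an assumption):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForKubelet polls the healthz URL until it returns HTTP 200 or the
// timeout elapses, roughly what the [kubelet-check] step describes.
func waitForKubelet(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("kubelet not healthy at %s within %s", url, timeout)
}

func main() {
	if err := waitForKubelet("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
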
	
	
	==> CRI-O <==
	Dec 17 20:24:10 pause-722044 crio[2801]: time="2025-12-17 20:24:10.098133899Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766003050098099223,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:128011,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7a1df574-2536-4a34-a82a-f3cb7a83ac03 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 20:24:10 pause-722044 crio[2801]: time="2025-12-17 20:24:10.100826369Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ed2bef58-d7c3-4329-8e35-d1d9344b2a47 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 20:24:10 pause-722044 crio[2801]: time="2025-12-17 20:24:10.101142132Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ed2bef58-d7c3-4329-8e35-d1d9344b2a47 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 20:24:10 pause-722044 crio[2801]: time="2025-12-17 20:24:10.101959023Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:569ce5bc0074ae2269067a48b2c7bdcb79c7ec4420e657f9b3d29469cf75803e,PodSandboxId:a8aeaab42259b6fb3f701143a7ae8d45a4d3adf27ce74e8494720310469c642b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766003033780682122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7grrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7659c433-1b61-45dd-a6ee-14007a1efcda,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c04f950f167c76f20d1fcacf0e4feea0a624a6b0f25513076e6d8ee564a481bd,PodSandboxId:b11ff73723458d52c2f72fd01fd3275b4049e56a7762280f6ac2dea187dd801e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766003033796784692,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-snthq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24049acb-98c2-425b-b662-917a0f36e924,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff6cf0abbf3f6fda28f66d7fe150b08ac01b68f812fe75463e81ccdddfd15dc6,PodSandboxId:cb06495b9771e523f9c986a0a0c266640e41da22723c8ca23ddd0059786583ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766003028951678240,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83d8b514e6b67d9e22e45fb706e56aae,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7faab952af369c929f10b4a2e7f271a9e14e5d78e2069f149ca70ff8b9e4b904,PodSandboxId:f209a2779d795938af8709dd341c6402f8d38f6ff06dcc6947169c801e3945ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b
0c,State:CONTAINER_RUNNING,CreatedAt:1766003028960946916,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a6a12fb84ad9e69321c2dfd275a193c,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2c6ecbb072a467a63706401a7d5c001a661e32653ead95d8e11bf023908b7e0,PodSandboxId:97505412ecef1fbc01c623b497b45b3df1bbb0cc01a790811cffe6a8baebf71e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766003028958737572,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb726f0e9635b0cbfd43d7e7b7eb9dbe,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77f4487b3cdf857c8a5674baf1be7a012598267474c78df167bb34a781f1469c,PodSandboxId:d16b6434c38711eb995045ab894c68c5184fb0c3e6ac82dcc155e49b7b245df0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Imag
e:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766003028929046639,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b21d1fafe57d24c490b66f74eb1a9a96,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:288a092a801206a702bf6578ae6b07e6044aead431acb0edbc60cf72d5ca3b17,PodSandboxId:b11ff73723458d52c2f72fd01fd3275b4049e56a776228
0f6ac2dea187dd801e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1766003003475507590,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-snthq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24049acb-98c2-425b-b662-917a0f36e924,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3148e9a334330e99df200e53da3dd65a4273e1bd7bfe071c3d5a2ab5babc79cd,PodSandboxId:a8aeaab42259b6fb3f701143a7ae8d45a4d3adf27ce74e8494720310469c642b,Metadata:&Conta
inerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1766003004534112897,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7grrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7659c433-1b61-45dd-a6ee-14007a1efcda,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00a2e25f105f3f6ead22ad8dc039761969eb90ee1fbadaddcc49dfff31ea179b,PodSandboxId:f209a2779d795938af8709dd341c6402f8d38f6ff06dcc6947169c801e3945ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_EXITED,CreatedAt:1766003003541941060,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a6a12fb84ad9e69321c2dfd275a193c,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dbeb3a9804d156cf89fd691a0bb1fbcc3a6a27f4878706712834561cd962506,PodSandboxId:d16b6434c38711eb995045ab894c68c5184fb0c3e6ac82dcc155e49b7b245df0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1766003003409790971,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b21d1fafe57d24c490b66f74eb1a9a96,},Annotations:map[string]string{io.kubernetes.container.hash:
5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96157b431de6bce3a1d7a6419fab455461481cafe1319056a3a0efa2e1c076cb,PodSandboxId:97505412ecef1fbc01c623b497b45b3df1bbb0cc01a790811cffe6a8baebf71e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1766003003376876821,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-722044,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: eb726f0e9635b0cbfd43d7e7b7eb9dbe,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb92e68f0672b340c271cd39f375352462fbcd4af3068254b0220df4d2490ea9,PodSandboxId:cb06495b9771e523f9c986a0a0c266640e41da22723c8ca23ddd0059786583ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1766003003277513316,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83d8b514e6b67d9e22e45fb706e56aae,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ed2bef58-d7c3-4329-8e35-d1d9344b2a47 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 20:24:10 pause-722044 crio[2801]: time="2025-12-17 20:24:10.150917677Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f2c56865-0e29-4fab-8f19-3b3e78e52a26 name=/runtime.v1.RuntimeService/Version
	Dec 17 20:24:10 pause-722044 crio[2801]: time="2025-12-17 20:24:10.151154570Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f2c56865-0e29-4fab-8f19-3b3e78e52a26 name=/runtime.v1.RuntimeService/Version
	Dec 17 20:24:10 pause-722044 crio[2801]: time="2025-12-17 20:24:10.152743256Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ee6898ea-ea15-4f7c-a59c-d26b313a921b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 20:24:10 pause-722044 crio[2801]: time="2025-12-17 20:24:10.153144145Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766003050153122862,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:128011,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ee6898ea-ea15-4f7c-a59c-d26b313a921b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 20:24:10 pause-722044 crio[2801]: time="2025-12-17 20:24:10.154149848Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=34ee0400-109e-4765-9f6f-0987fb2c8880 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 20:24:10 pause-722044 crio[2801]: time="2025-12-17 20:24:10.154281503Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=34ee0400-109e-4765-9f6f-0987fb2c8880 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 20:24:10 pause-722044 crio[2801]: time="2025-12-17 20:24:10.154501149Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:569ce5bc0074ae2269067a48b2c7bdcb79c7ec4420e657f9b3d29469cf75803e,PodSandboxId:a8aeaab42259b6fb3f701143a7ae8d45a4d3adf27ce74e8494720310469c642b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766003033780682122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7grrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7659c433-1b61-45dd-a6ee-14007a1efcda,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c04f950f167c76f20d1fcacf0e4feea0a624a6b0f25513076e6d8ee564a481bd,PodSandboxId:b11ff73723458d52c2f72fd01fd3275b4049e56a7762280f6ac2dea187dd801e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766003033796784692,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-snthq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24049acb-98c2-425b-b662-917a0f36e924,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff6cf0abbf3f6fda28f66d7fe150b08ac01b68f812fe75463e81ccdddfd15dc6,PodSandboxId:cb06495b9771e523f9c986a0a0c266640e41da22723c8ca23ddd0059786583ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766003028951678240,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83d8b514e6b67d9e22e45fb706e56aae,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7faab952af369c929f10b4a2e7f271a9e14e5d78e2069f149ca70ff8b9e4b904,PodSandboxId:f209a2779d795938af8709dd341c6402f8d38f6ff06dcc6947169c801e3945ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b
0c,State:CONTAINER_RUNNING,CreatedAt:1766003028960946916,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a6a12fb84ad9e69321c2dfd275a193c,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2c6ecbb072a467a63706401a7d5c001a661e32653ead95d8e11bf023908b7e0,PodSandboxId:97505412ecef1fbc01c623b497b45b3df1bbb0cc01a790811cffe6a8baebf71e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766003028958737572,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb726f0e9635b0cbfd43d7e7b7eb9dbe,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77f4487b3cdf857c8a5674baf1be7a012598267474c78df167bb34a781f1469c,PodSandboxId:d16b6434c38711eb995045ab894c68c5184fb0c3e6ac82dcc155e49b7b245df0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Imag
e:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766003028929046639,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b21d1fafe57d24c490b66f74eb1a9a96,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:288a092a801206a702bf6578ae6b07e6044aead431acb0edbc60cf72d5ca3b17,PodSandboxId:b11ff73723458d52c2f72fd01fd3275b4049e56a776228
0f6ac2dea187dd801e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1766003003475507590,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-snthq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24049acb-98c2-425b-b662-917a0f36e924,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3148e9a334330e99df200e53da3dd65a4273e1bd7bfe071c3d5a2ab5babc79cd,PodSandboxId:a8aeaab42259b6fb3f701143a7ae8d45a4d3adf27ce74e8494720310469c642b,Metadata:&Conta
inerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1766003004534112897,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7grrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7659c433-1b61-45dd-a6ee-14007a1efcda,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00a2e25f105f3f6ead22ad8dc039761969eb90ee1fbadaddcc49dfff31ea179b,PodSandboxId:f209a2779d795938af8709dd341c6402f8d38f6ff06dcc6947169c801e3945ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_EXITED,CreatedAt:1766003003541941060,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a6a12fb84ad9e69321c2dfd275a193c,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dbeb3a9804d156cf89fd691a0bb1fbcc3a6a27f4878706712834561cd962506,PodSandboxId:d16b6434c38711eb995045ab894c68c5184fb0c3e6ac82dcc155e49b7b245df0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1766003003409790971,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b21d1fafe57d24c490b66f74eb1a9a96,},Annotations:map[string]string{io.kubernetes.container.hash:
5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96157b431de6bce3a1d7a6419fab455461481cafe1319056a3a0efa2e1c076cb,PodSandboxId:97505412ecef1fbc01c623b497b45b3df1bbb0cc01a790811cffe6a8baebf71e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1766003003376876821,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-722044,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: eb726f0e9635b0cbfd43d7e7b7eb9dbe,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb92e68f0672b340c271cd39f375352462fbcd4af3068254b0220df4d2490ea9,PodSandboxId:cb06495b9771e523f9c986a0a0c266640e41da22723c8ca23ddd0059786583ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1766003003277513316,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83d8b514e6b67d9e22e45fb706e56aae,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=34ee0400-109e-4765-9f6f-0987fb2c8880 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 20:24:10 pause-722044 crio[2801]: time="2025-12-17 20:24:10.203306446Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a597cdfb-38f8-44a0-8e5c-3e124148a3ca name=/runtime.v1.RuntimeService/Version
	Dec 17 20:24:10 pause-722044 crio[2801]: time="2025-12-17 20:24:10.203407558Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a597cdfb-38f8-44a0-8e5c-3e124148a3ca name=/runtime.v1.RuntimeService/Version
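
The Version request/response pairs in this journal are CRI gRPC round trips against CRI-O's RuntimeService. A short Go sketch of the same call via the cri-api client is below; the unix socket path is an assumption (the conventional CRI-O location), since the log does not record it.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed socket path for illustration; adjust to the runtime's actual endpoint.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Same RPC as the /runtime.v1.RuntimeService/Version entries above.
	resp, err := client.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s %s (CRI %s)\n", resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
}
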
	Dec 17 20:24:10 pause-722044 crio[2801]: time="2025-12-17 20:24:10.205002876Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e3da8ee4-35f6-4903-a884-11f13a0b2506 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 20:24:10 pause-722044 crio[2801]: time="2025-12-17 20:24:10.205642174Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766003050205613720,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:128011,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e3da8ee4-35f6-4903-a884-11f13a0b2506 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 20:24:10 pause-722044 crio[2801]: time="2025-12-17 20:24:10.206644973Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e00d3353-8aa8-4b77-9e78-0819b0c1fee1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 20:24:10 pause-722044 crio[2801]: time="2025-12-17 20:24:10.206736210Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e00d3353-8aa8-4b77-9e78-0819b0c1fee1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 20:24:10 pause-722044 crio[2801]: time="2025-12-17 20:24:10.207585272Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:569ce5bc0074ae2269067a48b2c7bdcb79c7ec4420e657f9b3d29469cf75803e,PodSandboxId:a8aeaab42259b6fb3f701143a7ae8d45a4d3adf27ce74e8494720310469c642b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766003033780682122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7grrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7659c433-1b61-45dd-a6ee-14007a1efcda,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c04f950f167c76f20d1fcacf0e4feea0a624a6b0f25513076e6d8ee564a481bd,PodSandboxId:b11ff73723458d52c2f72fd01fd3275b4049e56a7762280f6ac2dea187dd801e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766003033796784692,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-snthq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24049acb-98c2-425b-b662-917a0f36e924,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff6cf0abbf3f6fda28f66d7fe150b08ac01b68f812fe75463e81ccdddfd15dc6,PodSandboxId:cb06495b9771e523f9c986a0a0c266640e41da22723c8ca23ddd0059786583ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766003028951678240,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83d8b514e6b67d9e22e45fb706e56aae,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7faab952af369c929f10b4a2e7f271a9e14e5d78e2069f149ca70ff8b9e4b904,PodSandboxId:f209a2779d795938af8709dd341c6402f8d38f6ff06dcc6947169c801e3945ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b
0c,State:CONTAINER_RUNNING,CreatedAt:1766003028960946916,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a6a12fb84ad9e69321c2dfd275a193c,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2c6ecbb072a467a63706401a7d5c001a661e32653ead95d8e11bf023908b7e0,PodSandboxId:97505412ecef1fbc01c623b497b45b3df1bbb0cc01a790811cffe6a8baebf71e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766003028958737572,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb726f0e9635b0cbfd43d7e7b7eb9dbe,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77f4487b3cdf857c8a5674baf1be7a012598267474c78df167bb34a781f1469c,PodSandboxId:d16b6434c38711eb995045ab894c68c5184fb0c3e6ac82dcc155e49b7b245df0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Imag
e:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766003028929046639,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b21d1fafe57d24c490b66f74eb1a9a96,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:288a092a801206a702bf6578ae6b07e6044aead431acb0edbc60cf72d5ca3b17,PodSandboxId:b11ff73723458d52c2f72fd01fd3275b4049e56a776228
0f6ac2dea187dd801e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1766003003475507590,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-snthq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24049acb-98c2-425b-b662-917a0f36e924,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3148e9a334330e99df200e53da3dd65a4273e1bd7bfe071c3d5a2ab5babc79cd,PodSandboxId:a8aeaab42259b6fb3f701143a7ae8d45a4d3adf27ce74e8494720310469c642b,Metadata:&Conta
inerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1766003004534112897,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7grrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7659c433-1b61-45dd-a6ee-14007a1efcda,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00a2e25f105f3f6ead22ad8dc039761969eb90ee1fbadaddcc49dfff31ea179b,PodSandboxId:f209a2779d795938af8709dd341c6402f8d38f6ff06dcc6947169c801e3945ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_EXITED,CreatedAt:1766003003541941060,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a6a12fb84ad9e69321c2dfd275a193c,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dbeb3a9804d156cf89fd691a0bb1fbcc3a6a27f4878706712834561cd962506,PodSandboxId:d16b6434c38711eb995045ab894c68c5184fb0c3e6ac82dcc155e49b7b245df0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1766003003409790971,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b21d1fafe57d24c490b66f74eb1a9a96,},Annotations:map[string]string{io.kubernetes.container.hash:
5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96157b431de6bce3a1d7a6419fab455461481cafe1319056a3a0efa2e1c076cb,PodSandboxId:97505412ecef1fbc01c623b497b45b3df1bbb0cc01a790811cffe6a8baebf71e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1766003003376876821,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-722044,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: eb726f0e9635b0cbfd43d7e7b7eb9dbe,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb92e68f0672b340c271cd39f375352462fbcd4af3068254b0220df4d2490ea9,PodSandboxId:cb06495b9771e523f9c986a0a0c266640e41da22723c8ca23ddd0059786583ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1766003003277513316,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83d8b514e6b67d9e22e45fb706e56aae,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e00d3353-8aa8-4b77-9e78-0819b0c1fee1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 20:24:10 pause-722044 crio[2801]: time="2025-12-17 20:24:10.254605555Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=47665cbb-0e11-4b63-a143-0802b771b92a name=/runtime.v1.RuntimeService/Version
	Dec 17 20:24:10 pause-722044 crio[2801]: time="2025-12-17 20:24:10.254923111Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=47665cbb-0e11-4b63-a143-0802b771b92a name=/runtime.v1.RuntimeService/Version
	Dec 17 20:24:10 pause-722044 crio[2801]: time="2025-12-17 20:24:10.256540529Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8d11e399-a528-4298-be85-9ba5ded21490 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 20:24:10 pause-722044 crio[2801]: time="2025-12-17 20:24:10.257804070Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766003050257778219,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:128011,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8d11e399-a528-4298-be85-9ba5ded21490 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 20:24:10 pause-722044 crio[2801]: time="2025-12-17 20:24:10.259578131Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ccae6abd-59d4-49ca-9b78-5bbd880b3e20 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 20:24:10 pause-722044 crio[2801]: time="2025-12-17 20:24:10.259811874Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ccae6abd-59d4-49ca-9b78-5bbd880b3e20 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 20:24:10 pause-722044 crio[2801]: time="2025-12-17 20:24:10.260816771Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:569ce5bc0074ae2269067a48b2c7bdcb79c7ec4420e657f9b3d29469cf75803e,PodSandboxId:a8aeaab42259b6fb3f701143a7ae8d45a4d3adf27ce74e8494720310469c642b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766003033780682122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7grrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7659c433-1b61-45dd-a6ee-14007a1efcda,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c04f950f167c76f20d1fcacf0e4feea0a624a6b0f25513076e6d8ee564a481bd,PodSandboxId:b11ff73723458d52c2f72fd01fd3275b4049e56a7762280f6ac2dea187dd801e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766003033796784692,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-snthq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24049acb-98c2-425b-b662-917a0f36e924,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff6cf0abbf3f6fda28f66d7fe150b08ac01b68f812fe75463e81ccdddfd15dc6,PodSandboxId:cb06495b9771e523f9c986a0a0c266640e41da22723c8ca23ddd0059786583ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766003028951678240,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83d8b514e6b67d9e22e45fb706e56aae,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7faab952af369c929f10b4a2e7f271a9e14e5d78e2069f149ca70ff8b9e4b904,PodSandboxId:f209a2779d795938af8709dd341c6402f8d38f6ff06dcc6947169c801e3945ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b
0c,State:CONTAINER_RUNNING,CreatedAt:1766003028960946916,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a6a12fb84ad9e69321c2dfd275a193c,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2c6ecbb072a467a63706401a7d5c001a661e32653ead95d8e11bf023908b7e0,PodSandboxId:97505412ecef1fbc01c623b497b45b3df1bbb0cc01a790811cffe6a8baebf71e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766003028958737572,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb726f0e9635b0cbfd43d7e7b7eb9dbe,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77f4487b3cdf857c8a5674baf1be7a012598267474c78df167bb34a781f1469c,PodSandboxId:d16b6434c38711eb995045ab894c68c5184fb0c3e6ac82dcc155e49b7b245df0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Imag
e:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766003028929046639,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b21d1fafe57d24c490b66f74eb1a9a96,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:288a092a801206a702bf6578ae6b07e6044aead431acb0edbc60cf72d5ca3b17,PodSandboxId:b11ff73723458d52c2f72fd01fd3275b4049e56a776228
0f6ac2dea187dd801e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1766003003475507590,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-snthq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24049acb-98c2-425b-b662-917a0f36e924,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3148e9a334330e99df200e53da3dd65a4273e1bd7bfe071c3d5a2ab5babc79cd,PodSandboxId:a8aeaab42259b6fb3f701143a7ae8d45a4d3adf27ce74e8494720310469c642b,Metadata:&Conta
inerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1766003004534112897,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7grrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7659c433-1b61-45dd-a6ee-14007a1efcda,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00a2e25f105f3f6ead22ad8dc039761969eb90ee1fbadaddcc49dfff31ea179b,PodSandboxId:f209a2779d795938af8709dd341c6402f8d38f6ff06dcc6947169c801e3945ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_EXITED,CreatedAt:1766003003541941060,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a6a12fb84ad9e69321c2dfd275a193c,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dbeb3a9804d156cf89fd691a0bb1fbcc3a6a27f4878706712834561cd962506,PodSandboxId:d16b6434c38711eb995045ab894c68c5184fb0c3e6ac82dcc155e49b7b245df0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1766003003409790971,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b21d1fafe57d24c490b66f74eb1a9a96,},Annotations:map[string]string{io.kubernetes.container.hash:
5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96157b431de6bce3a1d7a6419fab455461481cafe1319056a3a0efa2e1c076cb,PodSandboxId:97505412ecef1fbc01c623b497b45b3df1bbb0cc01a790811cffe6a8baebf71e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1766003003376876821,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-722044,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: eb726f0e9635b0cbfd43d7e7b7eb9dbe,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb92e68f0672b340c271cd39f375352462fbcd4af3068254b0220df4d2490ea9,PodSandboxId:cb06495b9771e523f9c986a0a0c266640e41da22723c8ca23ddd0059786583ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1766003003277513316,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-722044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83d8b514e6b67d9e22e45fb706e56aae,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ccae6abd-59d4-49ca-9b78-5bbd880b3e20 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	c04f950f167c7       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691   16 seconds ago      Running             kube-proxy                2                   b11ff73723458       kube-proxy-snthq                       kube-system
	569ce5bc0074a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   16 seconds ago      Running             coredns                   2                   a8aeaab42259b       coredns-66bc5c9577-7grrd               kube-system
	7faab952af369       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c   21 seconds ago      Running             kube-apiserver            2                   f209a2779d795       kube-apiserver-pause-722044            kube-system
	e2c6ecbb072a4       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942   21 seconds ago      Running             kube-controller-manager   2                   97505412ecef1       kube-controller-manager-pause-722044   kube-system
	ff6cf0abbf3f6       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78   21 seconds ago      Running             kube-scheduler            2                   cb06495b9771e       kube-scheduler-pause-722044            kube-system
	77f4487b3cdf8       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   21 seconds ago      Running             etcd                      2                   d16b6434c3871       etcd-pause-722044                      kube-system
	3148e9a334330       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   45 seconds ago      Exited              coredns                   1                   a8aeaab42259b       coredns-66bc5c9577-7grrd               kube-system
	00a2e25f105f3       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c   46 seconds ago      Exited              kube-apiserver            1                   f209a2779d795       kube-apiserver-pause-722044            kube-system
	288a092a80120       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691   46 seconds ago      Exited              kube-proxy                1                   b11ff73723458       kube-proxy-snthq                       kube-system
	1dbeb3a9804d1       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   46 seconds ago      Exited              etcd                      1                   d16b6434c3871       etcd-pause-722044                      kube-system
	96157b431de6b       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942   46 seconds ago      Exited              kube-controller-manager   1                   97505412ecef1       kube-controller-manager-pause-722044   kube-system
	eb92e68f0672b       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78   47 seconds ago      Exited              kube-scheduler            1                   cb06495b9771e       kube-scheduler-pause-722044            kube-system
	
	
	==> coredns [3148e9a334330e99df200e53da3dd65a4273e1bd7bfe071c3d5a2ab5babc79cd] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8b8641eae0af5337389aa76a78f71d2e2a7bd54cc199277be5abe199aebbfd3c9e156259680c91eb397a4c282437fd35af249d42857043b32bf3beb690ad2f54
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:55174 - 23645 "HINFO IN 466270137555793141.2631611240893111981. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.027213015s
	
	
	==> coredns [569ce5bc0074ae2269067a48b2c7bdcb79c7ec4420e657f9b3d29469cf75803e] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8b8641eae0af5337389aa76a78f71d2e2a7bd54cc199277be5abe199aebbfd3c9e156259680c91eb397a4c282437fd35af249d42857043b32bf3beb690ad2f54
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48732 - 46731 "HINFO IN 729494661844392183.2324542142382827573. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.026035134s
	
	
	==> describe nodes <==
	Name:               pause-722044
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-722044
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2e96f676eb7e96389e85fe0658a4ede4c4ba6924
	                    minikube.k8s.io/name=pause-722044
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T20_22_53_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 20:22:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-722044
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 20:24:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 20:23:52 +0000   Wed, 17 Dec 2025 20:22:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 20:23:52 +0000   Wed, 17 Dec 2025 20:22:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 20:23:52 +0000   Wed, 17 Dec 2025 20:22:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 20:23:52 +0000   Wed, 17 Dec 2025 20:22:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.108
	  Hostname:    pause-722044
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 80d5baeedfdb460f88dada4fa0f98d05
	  System UUID:                80d5baee-dfdb-460f-88da-da4fa0f98d05
	  Boot ID:                    0a5a2d07-0736-4b0f-aade-295cd6926e33
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-7grrd                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     73s
	  kube-system                 etcd-pause-722044                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         78s
	  kube-system                 kube-apiserver-pause-722044             250m (12%)    0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-controller-manager-pause-722044    200m (10%)    0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-proxy-snthq                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-scheduler-pause-722044             100m (5%)     0 (0%)      0 (0%)           0 (0%)         78s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 71s                kube-proxy       
	  Normal  Starting                 16s                kube-proxy       
	  Normal  Starting                 42s                kube-proxy       
	  Normal  Starting                 78s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  78s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  78s                kubelet          Node pause-722044 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    78s                kubelet          Node pause-722044 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     78s                kubelet          Node pause-722044 status is now: NodeHasSufficientPID
	  Normal  NodeReady                77s                kubelet          Node pause-722044 status is now: NodeReady
	  Normal  RegisteredNode           74s                node-controller  Node pause-722044 event: Registered Node pause-722044 in Controller
	  Normal  RegisteredNode           39s                node-controller  Node pause-722044 event: Registered Node pause-722044 in Controller
	  Normal  Starting                 22s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node pause-722044 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node pause-722044 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)  kubelet          Node pause-722044 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14s                node-controller  Node pause-722044 event: Registered Node pause-722044 in Controller
	
	
	==> dmesg <==
	[Dec17 20:22] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001631] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.005825] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.184291] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.083334] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.117723] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.182491] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.259984] kauditd_printk_skb: 18 callbacks suppressed
	[Dec17 20:23] kauditd_printk_skb: 219 callbacks suppressed
	[  +0.105331] kauditd_printk_skb: 38 callbacks suppressed
	[  +0.623003] kauditd_printk_skb: 252 callbacks suppressed
	[  +7.263217] kauditd_printk_skb: 41 callbacks suppressed
	[  +0.205967] kauditd_printk_skb: 25 callbacks suppressed
	[  +5.617479] kauditd_printk_skb: 83 callbacks suppressed
	
	
	==> etcd [1dbeb3a9804d156cf89fd691a0bb1fbcc3a6a27f4878706712834561cd962506] <==
	{"level":"warn","ts":"2025-12-17T20:23:27.052902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:23:27.062522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:23:27.073304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:23:27.082946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:23:27.090549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:23:27.103785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:23:27.163623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43892","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-17T20:23:45.512949Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-17T20:23:45.513086Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-722044","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.108:2380"],"advertise-client-urls":["https://192.168.61.108:2379"]}
	{"level":"error","ts":"2025-12-17T20:23:45.513264Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-17T20:23:45.515610Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-17T20:23:45.517151Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-17T20:23:45.517288Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.61.108:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-17T20:23:45.517447Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.61.108:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-17T20:23:45.517444Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-17T20:23:45.517462Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.61.108:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-17T20:23:45.517465Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-17T20:23:45.517477Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-17T20:23:45.517481Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7f161a451982983d","current-leader-member-id":"7f161a451982983d"}
	{"level":"info","ts":"2025-12-17T20:23:45.517631Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-17T20:23:45.517649Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-17T20:23:45.521467Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.61.108:2380"}
	{"level":"error","ts":"2025-12-17T20:23:45.521561Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.61.108:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-17T20:23:45.521608Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.61.108:2380"}
	{"level":"info","ts":"2025-12-17T20:23:45.521620Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-722044","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.108:2380"],"advertise-client-urls":["https://192.168.61.108:2379"]}
	
	
	==> etcd [77f4487b3cdf857c8a5674baf1be7a012598267474c78df167bb34a781f1469c] <==
	{"level":"warn","ts":"2025-12-17T20:23:51.668271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:23:51.679352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:23:51.697066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:23:51.715028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:23:51.735834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:23:51.756357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:23:51.771226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:23:51.791836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:23:51.909748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57298","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-17T20:24:02.979645Z","caller":"traceutil/trace.go:172","msg":"trace[1810309793] linearizableReadLoop","detail":"{readStateIndex:599; appliedIndex:599; }","duration":"243.12447ms","start":"2025-12-17T20:24:02.736503Z","end":"2025-12-17T20:24:02.979627Z","steps":["trace[1810309793] 'read index received'  (duration: 243.120417ms)","trace[1810309793] 'applied index is now lower than readState.Index'  (duration: 3.525µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T20:24:02.979801Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"243.291024ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-17T20:24:02.979853Z","caller":"traceutil/trace.go:172","msg":"trace[1907126338] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:554; }","duration":"243.367522ms","start":"2025-12-17T20:24:02.736478Z","end":"2025-12-17T20:24:02.979846Z","steps":["trace[1907126338] 'agreement among raft nodes before linearized reading'  (duration: 243.26366ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T20:24:02.981027Z","caller":"traceutil/trace.go:172","msg":"trace[565004326] transaction","detail":"{read_only:false; response_revision:555; number_of_response:1; }","duration":"282.183004ms","start":"2025-12-17T20:24:02.698828Z","end":"2025-12-17T20:24:02.981011Z","steps":["trace[565004326] 'process raft request'  (duration: 281.181971ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T20:24:03.508474Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"136.25823ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10970094889143209044 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-pause-722044\" mod_revision:555 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-pause-722044\" value_size:4777 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-pause-722044\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-17T20:24:03.508565Z","caller":"traceutil/trace.go:172","msg":"trace[1076540448] transaction","detail":"{read_only:false; response_revision:556; number_of_response:1; }","duration":"515.232148ms","start":"2025-12-17T20:24:02.993323Z","end":"2025-12-17T20:24:03.508556Z","steps":["trace[1076540448] 'process raft request'  (duration: 378.497506ms)","trace[1076540448] 'compare'  (duration: 136.028147ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T20:24:03.508609Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-17T20:24:02.993304Z","time spent":"515.284012ms","remote":"127.0.0.1:56558","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4839,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-pause-722044\" mod_revision:555 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-pause-722044\" value_size:4777 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-pause-722044\" > >"}
	{"level":"info","ts":"2025-12-17T20:24:03.916235Z","caller":"traceutil/trace.go:172","msg":"trace[924582882] linearizableReadLoop","detail":"{readStateIndex:601; appliedIndex:601; }","duration":"496.513762ms","start":"2025-12-17T20:24:03.419637Z","end":"2025-12-17T20:24:03.916150Z","steps":["trace[924582882] 'read index received'  (duration: 496.485734ms)","trace[924582882] 'applied index is now lower than readState.Index'  (duration: 27.229µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T20:24:04.028903Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"609.25853ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-722044\" limit:1 ","response":"range_response_count:1 size:6083"}
	{"level":"info","ts":"2025-12-17T20:24:04.029121Z","caller":"traceutil/trace.go:172","msg":"trace[1046993475] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-722044; range_end:; response_count:1; response_revision:556; }","duration":"609.472948ms","start":"2025-12-17T20:24:03.419633Z","end":"2025-12-17T20:24:04.029106Z","steps":["trace[1046993475] 'agreement among raft nodes before linearized reading'  (duration: 496.680005ms)","trace[1046993475] 'range keys from in-memory index tree'  (duration: 112.447609ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T20:24:04.028906Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"112.736085ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10970094889143209045 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-v25ynsvno6l4fgrhyszhs7l25a\" mod_revision:488 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-v25ynsvno6l4fgrhyszhs7l25a\" value_size:603 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-v25ynsvno6l4fgrhyszhs7l25a\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-17T20:24:04.029538Z","caller":"traceutil/trace.go:172","msg":"trace[745654762] transaction","detail":"{read_only:false; response_revision:558; number_of_response:1; }","duration":"826.866533ms","start":"2025-12-17T20:24:03.202661Z","end":"2025-12-17T20:24:04.029528Z","steps":["trace[745654762] 'process raft request'  (duration: 826.800949ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T20:24:04.029613Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-17T20:24:03.202644Z","time spent":"826.923329ms","remote":"127.0.0.1:56706","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":536,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/pause-722044\" mod_revision:487 > success:<request_put:<key:\"/registry/leases/kube-node-lease/pause-722044\" value_size:483 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/pause-722044\" > >"}
	{"level":"info","ts":"2025-12-17T20:24:04.029764Z","caller":"traceutil/trace.go:172","msg":"trace[551268511] transaction","detail":"{read_only:false; response_revision:557; number_of_response:1; }","duration":"959.454103ms","start":"2025-12-17T20:24:03.070285Z","end":"2025-12-17T20:24:04.029739Z","steps":["trace[551268511] 'process raft request'  (duration: 845.829481ms)","trace[551268511] 'compare'  (duration: 112.649256ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T20:24:04.029895Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-17T20:24:03.419619Z","time spent":"609.534154ms","remote":"127.0.0.1:56558","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":6105,"request content":"key:\"/registry/pods/kube-system/etcd-pause-722044\" limit:1 "}
	{"level":"warn","ts":"2025-12-17T20:24:04.029928Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-17T20:24:03.070104Z","time spent":"959.749238ms","remote":"127.0.0.1:56706","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":676,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-v25ynsvno6l4fgrhyszhs7l25a\" mod_revision:488 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-v25ynsvno6l4fgrhyszhs7l25a\" value_size:603 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-v25ynsvno6l4fgrhyszhs7l25a\" > >"}
	
	
	==> kernel <==
	 20:24:10 up 1 min,  0 users,  load average: 0.88, 0.38, 0.14
	Linux pause-722044 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Dec 17 12:49:57 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [00a2e25f105f3f6ead22ad8dc039761969eb90ee1fbadaddcc49dfff31ea179b] <==
	I1217 20:23:35.282600       1 controller.go:120] Shutting down OpenAPI V3 controller
	I1217 20:23:35.282609       1 controller.go:170] Shutting down OpenAPI controller
	I1217 20:23:35.282864       1 apiservice_controller.go:134] Shutting down APIServiceRegistrationController
	I1217 20:23:35.282877       1 cluster_authentication_trust_controller.go:482] Shutting down cluster_authentication_trust_controller controller
	I1217 20:23:35.282886       1 customresource_discovery_controller.go:332] Shutting down DiscoveryController
	I1217 20:23:35.282904       1 crdregistration_controller.go:145] Shutting down crd-autoregister controller
	I1217 20:23:35.282943       1 apf_controller.go:389] Shutting down API Priority and Fairness config worker
	I1217 20:23:35.284615       1 dynamic_cafile_content.go:175] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1217 20:23:35.284713       1 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1217 20:23:35.286651       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1217 20:23:35.286713       1 dynamic_cafile_content.go:175] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1217 20:23:35.285436       1 dynamic_serving_content.go:149] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I1217 20:23:35.285464       1 controller.go:84] Shutting down OpenAPI AggregationController
	I1217 20:23:35.287619       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I1217 20:23:35.285537       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I1217 20:23:35.287889       1 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1217 20:23:35.285529       1 object_count_tracker.go:141] "StorageObjectCountTracker pruner is exiting"
	I1217 20:23:35.285559       1 controller.go:157] Shutting down quota evaluator
	I1217 20:23:35.288869       1 controller.go:176] quota evaluator worker shutdown
	I1217 20:23:35.286553       1 repairip.go:246] Shutting down ipallocator-repair-controller
	I1217 20:23:35.288948       1 controller.go:176] quota evaluator worker shutdown
	I1217 20:23:35.288974       1 controller.go:176] quota evaluator worker shutdown
	I1217 20:23:35.288991       1 controller.go:176] quota evaluator worker shutdown
	I1217 20:23:35.289005       1 controller.go:176] quota evaluator worker shutdown
	I1217 20:23:35.286637       1 secure_serving.go:259] Stopped listening on [::]:8443
	
	
	==> kube-apiserver [7faab952af369c929f10b4a2e7f271a9e14e5d78e2069f149ca70ff8b9e4b904] <==
	I1217 20:23:52.853899       1 policy_source.go:240] refreshing policies
	I1217 20:23:52.854842       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1217 20:23:52.854904       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1217 20:23:52.854945       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1217 20:23:52.855061       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1217 20:23:52.855149       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1217 20:23:52.871829       1 aggregator.go:171] initial CRD sync complete...
	I1217 20:23:52.872024       1 autoregister_controller.go:144] Starting autoregister controller
	I1217 20:23:52.872136       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 20:23:52.872241       1 cache.go:39] Caches are synced for autoregister controller
	I1217 20:23:52.873572       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1217 20:23:52.871882       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1217 20:23:52.928567       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 20:23:52.928894       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1217 20:23:52.931343       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 20:23:53.522503       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 20:23:53.646839       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1217 20:23:54.275705       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.61.108]
	I1217 20:23:54.277468       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 20:23:54.288358       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 20:23:54.839055       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 20:23:54.932223       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1217 20:23:55.008727       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 20:23:55.017646       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 20:23:57.448426       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [96157b431de6bce3a1d7a6419fab455461481cafe1319056a3a0efa2e1c076cb] <==
	I1217 20:23:31.163701       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1217 20:23:31.163715       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1217 20:23:31.165076       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1217 20:23:31.165120       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1217 20:23:31.165447       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1217 20:23:31.166669       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1217 20:23:31.166829       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1217 20:23:31.168235       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1217 20:23:31.169375       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1217 20:23:31.171712       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 20:23:31.171761       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1217 20:23:31.174081       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1217 20:23:31.174212       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 20:23:31.175388       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1217 20:23:31.196678       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1217 20:23:31.196743       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 20:23:31.200021       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1217 20:23:31.208383       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1217 20:23:31.209649       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1217 20:23:31.212972       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1217 20:23:31.213024       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1217 20:23:31.213776       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1217 20:23:31.214153       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1217 20:23:31.214241       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1217 20:23:31.216106       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	
	
	==> kube-controller-manager [e2c6ecbb072a467a63706401a7d5c001a661e32653ead95d8e11bf023908b7e0] <==
	I1217 20:23:56.161240       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1217 20:23:56.164017       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1217 20:23:56.166774       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1217 20:23:56.169121       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1217 20:23:56.170886       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1217 20:23:56.176090       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1217 20:23:56.176554       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1217 20:23:56.177265       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-722044"
	I1217 20:23:56.177531       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1217 20:23:56.180553       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1217 20:23:56.182717       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 20:23:56.182765       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1217 20:23:56.183484       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1217 20:23:56.183894       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1217 20:23:56.184996       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1217 20:23:56.185369       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1217 20:23:56.186256       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1217 20:23:56.186824       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1217 20:23:56.190880       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1217 20:23:56.191029       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 20:23:56.191234       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1217 20:23:56.191241       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1217 20:23:56.197506       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1217 20:23:56.197516       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1217 20:23:56.202548       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	
	
	==> kube-proxy [288a092a801206a702bf6578ae6b07e6044aead431acb0edbc60cf72d5ca3b17] <==
	I1217 20:23:25.948130       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 20:23:27.892904       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 20:23:27.893039       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.61.108"]
	E1217 20:23:27.893155       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 20:23:28.137499       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1217 20:23:28.137589       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1217 20:23:28.137621       1 server_linux.go:132] "Using iptables Proxier"
	I1217 20:23:28.165753       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 20:23:28.171091       1 server.go:527] "Version info" version="v1.34.3"
	I1217 20:23:28.171261       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 20:23:28.175615       1 config.go:200] "Starting service config controller"
	I1217 20:23:28.175995       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 20:23:28.176131       1 config.go:106] "Starting endpoint slice config controller"
	I1217 20:23:28.176144       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 20:23:28.176626       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 20:23:28.176805       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 20:23:28.182606       1 config.go:309] "Starting node config controller"
	I1217 20:23:28.182678       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 20:23:28.182687       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 20:23:28.277244       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 20:23:28.277279       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 20:23:28.277304       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [c04f950f167c76f20d1fcacf0e4feea0a624a6b0f25513076e6d8ee564a481bd] <==
	I1217 20:23:54.176826       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 20:23:54.280292       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 20:23:54.280368       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.61.108"]
	E1217 20:23:54.280436       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 20:23:54.338119       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1217 20:23:54.338283       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1217 20:23:54.338337       1 server_linux.go:132] "Using iptables Proxier"
	I1217 20:23:54.351701       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 20:23:54.352088       1 server.go:527] "Version info" version="v1.34.3"
	I1217 20:23:54.352131       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 20:23:54.358961       1 config.go:200] "Starting service config controller"
	I1217 20:23:54.359002       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 20:23:54.359030       1 config.go:106] "Starting endpoint slice config controller"
	I1217 20:23:54.359035       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 20:23:54.359051       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 20:23:54.359056       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 20:23:54.360087       1 config.go:309] "Starting node config controller"
	I1217 20:23:54.360125       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 20:23:54.360133       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 20:23:54.459375       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 20:23:54.459404       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 20:23:54.459423       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [eb92e68f0672b340c271cd39f375352462fbcd4af3068254b0220df4d2490ea9] <==
	I1217 20:23:25.636160       1 serving.go:386] Generated self-signed cert in-memory
	W1217 20:23:27.854593       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1217 20:23:27.854633       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1217 20:23:27.854666       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1217 20:23:27.854680       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1217 20:23:27.894521       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1217 20:23:27.894614       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 20:23:27.897232       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 20:23:27.897961       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 20:23:27.897975       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 20:23:27.898055       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1217 20:23:27.999493       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 20:23:45.804886       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1217 20:23:45.804938       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1217 20:23:45.804981       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1217 20:23:45.805108       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [ff6cf0abbf3f6fda28f66d7fe150b08ac01b68f812fe75463e81ccdddfd15dc6] <==
	I1217 20:23:50.242871       1 serving.go:386] Generated self-signed cert in-memory
	W1217 20:23:52.754973       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1217 20:23:52.755015       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1217 20:23:52.755026       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1217 20:23:52.755032       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1217 20:23:52.824147       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1217 20:23:52.824298       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 20:23:52.832375       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 20:23:52.832522       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 20:23:52.833264       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 20:23:52.834629       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1217 20:23:52.848025       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1217 20:23:52.934099       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 17 20:23:52 pause-722044 kubelet[3955]: E1217 20:23:52.068556    3955 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-722044\" not found" node="pause-722044"
	Dec 17 20:23:52 pause-722044 kubelet[3955]: E1217 20:23:52.686604    3955 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-722044\" not found" node="pause-722044"
	Dec 17 20:23:52 pause-722044 kubelet[3955]: I1217 20:23:52.763109    3955 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-722044"
	Dec 17 20:23:52 pause-722044 kubelet[3955]: E1217 20:23:52.917479    3955 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-pause-722044\" already exists" pod="kube-system/etcd-pause-722044"
	Dec 17 20:23:52 pause-722044 kubelet[3955]: I1217 20:23:52.917587    3955 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-722044"
	Dec 17 20:23:52 pause-722044 kubelet[3955]: I1217 20:23:52.919840    3955 kubelet_node_status.go:124] "Node was previously registered" node="pause-722044"
	Dec 17 20:23:52 pause-722044 kubelet[3955]: I1217 20:23:52.920339    3955 kubelet_node_status.go:78] "Successfully registered node" node="pause-722044"
	Dec 17 20:23:52 pause-722044 kubelet[3955]: I1217 20:23:52.920579    3955 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 17 20:23:52 pause-722044 kubelet[3955]: I1217 20:23:52.923757    3955 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 17 20:23:52 pause-722044 kubelet[3955]: E1217 20:23:52.938341    3955 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-722044\" already exists" pod="kube-system/kube-apiserver-pause-722044"
	Dec 17 20:23:52 pause-722044 kubelet[3955]: I1217 20:23:52.938448    3955 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-722044"
	Dec 17 20:23:52 pause-722044 kubelet[3955]: E1217 20:23:52.954906    3955 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-722044\" already exists" pod="kube-system/kube-controller-manager-pause-722044"
	Dec 17 20:23:52 pause-722044 kubelet[3955]: I1217 20:23:52.954946    3955 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-722044"
	Dec 17 20:23:52 pause-722044 kubelet[3955]: E1217 20:23:52.972059    3955 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-722044\" already exists" pod="kube-system/kube-scheduler-pause-722044"
	Dec 17 20:23:53 pause-722044 kubelet[3955]: I1217 20:23:53.424312    3955 apiserver.go:52] "Watching apiserver"
	Dec 17 20:23:53 pause-722044 kubelet[3955]: I1217 20:23:53.464420    3955 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 17 20:23:53 pause-722044 kubelet[3955]: I1217 20:23:53.519342    3955 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/24049acb-98c2-425b-b662-917a0f36e924-xtables-lock\") pod \"kube-proxy-snthq\" (UID: \"24049acb-98c2-425b-b662-917a0f36e924\") " pod="kube-system/kube-proxy-snthq"
	Dec 17 20:23:53 pause-722044 kubelet[3955]: I1217 20:23:53.519395    3955 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/24049acb-98c2-425b-b662-917a0f36e924-lib-modules\") pod \"kube-proxy-snthq\" (UID: \"24049acb-98c2-425b-b662-917a0f36e924\") " pod="kube-system/kube-proxy-snthq"
	Dec 17 20:23:53 pause-722044 kubelet[3955]: I1217 20:23:53.734504    3955 scope.go:117] "RemoveContainer" containerID="3148e9a334330e99df200e53da3dd65a4273e1bd7bfe071c3d5a2ab5babc79cd"
	Dec 17 20:23:53 pause-722044 kubelet[3955]: I1217 20:23:53.734850    3955 scope.go:117] "RemoveContainer" containerID="288a092a801206a702bf6578ae6b07e6044aead431acb0edbc60cf72d5ca3b17"
	Dec 17 20:23:57 pause-722044 kubelet[3955]: I1217 20:23:57.410149    3955 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 17 20:23:58 pause-722044 kubelet[3955]: E1217 20:23:58.603675    3955 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766003038602446313  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:128011}  inodes_used:{value:57}}"
	Dec 17 20:23:58 pause-722044 kubelet[3955]: E1217 20:23:58.603704    3955 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766003038602446313  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:128011}  inodes_used:{value:57}}"
	Dec 17 20:24:08 pause-722044 kubelet[3955]: E1217 20:24:08.606405    3955 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766003048605770866  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:128011}  inodes_used:{value:57}}"
	Dec 17 20:24:08 pause-722044 kubelet[3955]: E1217 20:24:08.606429    3955 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766003048605770866  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:128011}  inodes_used:{value:57}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-722044 -n pause-722044
helpers_test.go:270: (dbg) Run:  kubectl --context pause-722044 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (60.08s)
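A side observation from the post-mortem above, not a claim about the cause of this failure: both kube-proxy containers log "Kube-proxy configuration may be incomplete or incorrect" because nodePortAddresses is unset, and the message itself suggests `--nodeport-addresses primary`. A minimal sketch of how that could be inspected and changed on this cluster, assuming the kubeadm-managed kube-proxy ConfigMap that minikube normally creates (profile/context name taken from the log):

	# show the kube-proxy config that produced the warning
	kubectl --context pause-722044 -n kube-system get configmap kube-proxy \
	  -o jsonpath='{.data.config\.conf}' | grep -n nodePortAddresses

	# after setting nodePortAddresses (e.g. to ["primary"]) in that ConfigMap,
	# restart the kube-proxy pods so they pick up the change
	kubectl --context pause-722044 -n kube-system rollout restart daemonset kube-proxy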

                                                
                                    

Test pass (364/424)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 25.49
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.16
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.3/json-events 11.24
13 TestDownloadOnly/v1.34.3/preload-exists 0
17 TestDownloadOnly/v1.34.3/LogsDuration 0.08
18 TestDownloadOnly/v1.34.3/DeleteAll 0.16
19 TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds 0.15
21 TestDownloadOnly/v1.35.0-rc.1/json-events 12.69
22 TestDownloadOnly/v1.35.0-rc.1/preload-exists 0
26 TestDownloadOnly/v1.35.0-rc.1/LogsDuration 0.07
27 TestDownloadOnly/v1.35.0-rc.1/DeleteAll 0.16
28 TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds 0.14
30 TestBinaryMirror 0.65
31 TestOffline 97.16
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 137.75
40 TestAddons/serial/GCPAuth/Namespaces 0.15
41 TestAddons/serial/GCPAuth/FakeCredentials 12.57
44 TestAddons/parallel/Registry 20.44
45 TestAddons/parallel/RegistryCreds 0.71
47 TestAddons/parallel/InspektorGadget 12.21
48 TestAddons/parallel/MetricsServer 7.19
50 TestAddons/parallel/CSI 43.15
51 TestAddons/parallel/Headlamp 23.22
52 TestAddons/parallel/CloudSpanner 5.59
53 TestAddons/parallel/LocalPath 61.19
54 TestAddons/parallel/NvidiaDevicePlugin 7
55 TestAddons/parallel/Yakd 10.97
57 TestAddons/StoppedEnableDisable 89.46
58 TestCertOptions 70.13
59 TestCertExpiration 294.27
61 TestForceSystemdFlag 100.8
62 TestForceSystemdEnv 90.82
67 TestErrorSpam/setup 38.56
68 TestErrorSpam/start 0.34
69 TestErrorSpam/status 0.69
70 TestErrorSpam/pause 1.59
71 TestErrorSpam/unpause 1.8
72 TestErrorSpam/stop 5.31
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 89.42
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 31.25
79 TestFunctional/serial/KubeContext 0.04
80 TestFunctional/serial/KubectlGetPods 0.09
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.27
84 TestFunctional/serial/CacheCmd/cache/add_local 2.26
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.18
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.52
89 TestFunctional/serial/CacheCmd/cache/delete 0.12
90 TestFunctional/serial/MinikubeKubectlCmd 0.12
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
92 TestFunctional/serial/ExtraConfig 35.85
93 TestFunctional/serial/ComponentHealth 0.07
94 TestFunctional/serial/LogsCmd 1.29
95 TestFunctional/serial/LogsFileCmd 1.29
96 TestFunctional/serial/InvalidService 4.91
98 TestFunctional/parallel/ConfigCmd 0.39
99 TestFunctional/parallel/DashboardCmd 13.33
100 TestFunctional/parallel/DryRun 0.23
101 TestFunctional/parallel/InternationalLanguage 0.11
102 TestFunctional/parallel/StatusCmd 0.75
106 TestFunctional/parallel/ServiceCmdConnect 17.48
107 TestFunctional/parallel/AddonsCmd 0.15
108 TestFunctional/parallel/PersistentVolumeClaim 42.87
110 TestFunctional/parallel/SSHCmd 0.35
111 TestFunctional/parallel/CpCmd 1.14
112 TestFunctional/parallel/MySQL 35.32
113 TestFunctional/parallel/FileSync 0.16
114 TestFunctional/parallel/CertSync 0.96
118 TestFunctional/parallel/NodeLabels 0.06
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.34
122 TestFunctional/parallel/License 0.38
123 TestFunctional/parallel/ServiceCmd/DeployApp 9.23
124 TestFunctional/parallel/Version/short 0.08
125 TestFunctional/parallel/Version/components 0.59
126 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
127 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
128 TestFunctional/parallel/ImageCommands/ImageListJson 0.37
129 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
130 TestFunctional/parallel/ImageCommands/ImageBuild 6.01
131 TestFunctional/parallel/ImageCommands/Setup 1.97
132 TestFunctional/parallel/UpdateContextCmd/no_changes 0.07
133 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.08
134 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.08
135 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.19
136 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.84
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.13
138 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.79
140 TestFunctional/parallel/ServiceCmd/List 0.3
141 TestFunctional/parallel/ServiceCmd/JSONOutput 0.29
142 TestFunctional/parallel/ServiceCmd/HTTPS 0.29
143 TestFunctional/parallel/ServiceCmd/Format 0.31
144 TestFunctional/parallel/ServiceCmd/URL 0.3
145 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.56
146 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 6.92
147 TestFunctional/parallel/ProfileCmd/profile_not_create 0.33
148 TestFunctional/parallel/ProfileCmd/profile_list 0.34
149 TestFunctional/parallel/ProfileCmd/profile_json_output 0.36
150 TestFunctional/parallel/MountCmd/any-port 11.06
151 TestFunctional/parallel/MountCmd/specific-port 1.62
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.12
162 TestFunctional/delete_echo-server_images 0.03
163 TestFunctional/delete_my-image_image 0.01
164 TestFunctional/delete_minikube_cached_images 0.02
168 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile 0
169 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy 81.52
170 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart 30.44
172 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext 0.04
173 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods 0.11
176 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote 3.28
177 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local 2.19
178 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete 0.06
179 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list 0.06
180 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node 0.18
181 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload 1.53
182 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete 0.12
183 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd 0.12
184 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly 0.11
185 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig 34.51
186 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth 0.07
187 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd 1.29
188 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd 1.28
189 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService 4.66
191 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd 0.41
192 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd 21.53
193 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun 0.21
194 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage 0.11
195 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd 0.66
199 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect 25.47
200 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd 0.15
201 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim 39.65
203 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd 0.35
204 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd 1.09
205 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL 31.11
206 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync 0.18
207 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync 1.12
211 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels 0.06
213 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled 0.34
215 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License 0.44
216 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes 0.07
217 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster 0.07
218 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters 0.07
219 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort 0.22
220 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable 0.28
221 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson 0.22
222 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml 0.25
223 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild 6.52
224 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup 0.93
225 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short 0.06
226 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components 0.73
227 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create 0.33
228 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon 1.4
238 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list 0.31
239 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output 0.35
240 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon 0.84
241 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon 2.53
242 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile 0.76
246 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp 15.29
247 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port 8.82
248 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List 0.9
249 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput 1.2
250 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS 0.31
251 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format 0.26
252 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL 0.28
253 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port 1.39
254 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup 1.28
255 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images 0.04
256 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image 0.02
257 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images 0.02
261 TestMultiControlPlane/serial/StartCluster 213.86
262 TestMultiControlPlane/serial/DeployApp 7.25
263 TestMultiControlPlane/serial/PingHostFromPods 1.3
264 TestMultiControlPlane/serial/AddWorkerNode 44.75
265 TestMultiControlPlane/serial/NodeLabels 0.07
266 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.7
267 TestMultiControlPlane/serial/CopyFile 10.73
268 TestMultiControlPlane/serial/StopSecondaryNode 84.85
269 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.52
270 TestMultiControlPlane/serial/RestartSecondaryNode 32.5
271 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.92
272 TestMultiControlPlane/serial/RestartClusterKeepsNodes 378.87
273 TestMultiControlPlane/serial/DeleteSecondaryNode 18.14
274 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.5
275 TestMultiControlPlane/serial/StopCluster 257.12
276 TestMultiControlPlane/serial/RestartCluster 86.27
277 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.57
278 TestMultiControlPlane/serial/AddSecondaryNode 82.84
279 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.66
284 TestJSONOutput/start/Command 82.87
285 TestJSONOutput/start/Audit 0
287 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
288 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
290 TestJSONOutput/pause/Command 0.74
291 TestJSONOutput/pause/Audit 0
293 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
294 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
296 TestJSONOutput/unpause/Command 0.63
297 TestJSONOutput/unpause/Audit 0
299 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
300 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
302 TestJSONOutput/stop/Command 7.43
303 TestJSONOutput/stop/Audit 0
305 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
306 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
307 TestErrorJSONOutput 0.22
312 TestMainNoArgs 0.06
313 TestMinikubeProfile 79.44
316 TestMountStart/serial/StartWithMountFirst 20.6
320 TestMultiNode/serial/FreshStart2Nodes 100.23
321 TestMultiNode/serial/DeployApp2Nodes 6.28
322 TestMultiNode/serial/PingHostFrom2Pods 0.84
323 TestMultiNode/serial/AddNode 42.23
324 TestMultiNode/serial/MultiNodeLabels 0.07
325 TestMultiNode/serial/ProfileList 0.46
326 TestMultiNode/serial/CopyFile 5.87
327 TestMultiNode/serial/StopNode 2.29
328 TestMultiNode/serial/StartAfterStop 40.84
329 TestMultiNode/serial/RestartKeepsNodes 285.28
330 TestMultiNode/serial/DeleteNode 2.64
331 TestMultiNode/serial/StopMultiNode 170.75
332 TestMultiNode/serial/RestartMultiNode 90.76
333 TestMultiNode/serial/ValidateNameConflict 38.38
340 TestScheduledStopUnix 108.14
344 TestRunningBinaryUpgrade 394.54
346 TestKubernetesUpgrade 130.78
349 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
352 TestISOImage/Setup 19.27
354 TestNoKubernetes/serial/StartWithK8s 78.35
359 TestNetworkPlugins/group/false 3.36
364 TestISOImage/Binaries/crictl 0.17
365 TestISOImage/Binaries/curl 0.19
366 TestISOImage/Binaries/docker 0.18
367 TestISOImage/Binaries/git 0.17
368 TestISOImage/Binaries/iptables 0.16
369 TestISOImage/Binaries/podman 0.17
370 TestISOImage/Binaries/rsync 0.16
371 TestISOImage/Binaries/socat 0.17
372 TestISOImage/Binaries/wget 0.17
373 TestISOImage/Binaries/VBoxControl 0.17
374 TestISOImage/Binaries/VBoxService 0.17
375 TestNoKubernetes/serial/StartWithStopK8s 32.35
376 TestNoKubernetes/serial/Start 59.98
377 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
378 TestNoKubernetes/serial/VerifyK8sNotRunning 0.17
379 TestNoKubernetes/serial/ProfileList 1.08
380 TestNoKubernetes/serial/Stop 1.33
381 TestNoKubernetes/serial/StartNoArgs 56.74
382 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.17
383 TestStoppedBinaryUpgrade/Setup 4.06
384 TestStoppedBinaryUpgrade/Upgrade 85.7
393 TestPause/serial/Start 63.23
394 TestStoppedBinaryUpgrade/MinikubeLogs 0.96
395 TestNetworkPlugins/group/auto/Start 81.67
397 TestNetworkPlugins/group/kindnet/Start 64.11
398 TestNetworkPlugins/group/auto/KubeletFlags 0.21
399 TestNetworkPlugins/group/auto/NetCatPod 12.27
400 TestNetworkPlugins/group/auto/DNS 0.17
401 TestNetworkPlugins/group/auto/Localhost 0.13
402 TestNetworkPlugins/group/auto/HairPin 0.14
403 TestNetworkPlugins/group/calico/Start 75.83
404 TestNetworkPlugins/group/custom-flannel/Start 92.23
405 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
406 TestNetworkPlugins/group/kindnet/KubeletFlags 0.19
407 TestNetworkPlugins/group/kindnet/NetCatPod 11.31
408 TestNetworkPlugins/group/kindnet/DNS 0.22
409 TestNetworkPlugins/group/kindnet/Localhost 0.19
410 TestNetworkPlugins/group/kindnet/HairPin 0.17
411 TestNetworkPlugins/group/enable-default-cni/Start 88.76
412 TestNetworkPlugins/group/calico/ControllerPod 6.01
413 TestNetworkPlugins/group/calico/KubeletFlags 0.18
414 TestNetworkPlugins/group/calico/NetCatPod 11.25
415 TestNetworkPlugins/group/flannel/Start 79.82
416 TestNetworkPlugins/group/calico/DNS 0.19
417 TestNetworkPlugins/group/calico/Localhost 0.14
418 TestNetworkPlugins/group/calico/HairPin 0.18
419 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.33
420 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.6
421 TestNetworkPlugins/group/bridge/Start 90.26
422 TestNetworkPlugins/group/custom-flannel/DNS 0.17
423 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
424 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
426 TestStartStop/group/old-k8s-version/serial/FirstStart 99.42
427 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.2
428 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.24
429 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
430 TestNetworkPlugins/group/enable-default-cni/Localhost 0.77
431 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
432 TestNetworkPlugins/group/flannel/ControllerPod 6.01
433 TestNetworkPlugins/group/flannel/KubeletFlags 0.2
434 TestNetworkPlugins/group/flannel/NetCatPod 10.27
436 TestStartStop/group/no-preload/serial/FirstStart 100.32
437 TestNetworkPlugins/group/flannel/DNS 0.17
438 TestNetworkPlugins/group/flannel/Localhost 0.14
439 TestNetworkPlugins/group/flannel/HairPin 0.14
441 TestStartStop/group/embed-certs/serial/FirstStart 83.09
442 TestNetworkPlugins/group/bridge/KubeletFlags 0.17
443 TestNetworkPlugins/group/bridge/NetCatPod 11.26
444 TestNetworkPlugins/group/bridge/DNS 0.2
445 TestNetworkPlugins/group/bridge/Localhost 0.15
446 TestNetworkPlugins/group/bridge/HairPin 0.21
448 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 86.02
449 TestStartStop/group/old-k8s-version/serial/DeployApp 11.45
450 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.42
451 TestStartStop/group/old-k8s-version/serial/Stop 73.77
452 TestStartStop/group/no-preload/serial/DeployApp 11.29
453 TestStartStop/group/embed-certs/serial/DeployApp 12.29
454 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1
455 TestStartStop/group/no-preload/serial/Stop 88.21
456 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.97
457 TestStartStop/group/embed-certs/serial/Stop 80.49
458 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.29
459 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.14
460 TestStartStop/group/old-k8s-version/serial/SecondStart 47.71
461 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.01
462 TestStartStop/group/default-k8s-diff-port/serial/Stop 82.88
463 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 10.01
464 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
465 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.14
466 TestStartStop/group/embed-certs/serial/SecondStart 44.7
467 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
468 TestStartStop/group/old-k8s-version/serial/Pause 2.81
469 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.15
470 TestStartStop/group/no-preload/serial/SecondStart 74.71
472 TestStartStop/group/newest-cni/serial/FirstStart 74.32
473 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.17
474 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 72.35
475 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
476 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
477 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
478 TestStartStop/group/embed-certs/serial/Pause 3.51
480 TestISOImage/PersistentMounts//data 0.22
481 TestISOImage/PersistentMounts//var/lib/docker 0.2
482 TestISOImage/PersistentMounts//var/lib/cni 0.18
483 TestISOImage/PersistentMounts//var/lib/kubelet 0.18
484 TestISOImage/PersistentMounts//var/lib/minikube 0.21
485 TestISOImage/PersistentMounts//var/lib/toolbox 0.2
486 TestISOImage/PersistentMounts//var/lib/boot2docker 0.22
487 TestISOImage/VersionJSON 0.38
488 TestISOImage/eBPFSupport 0.2
489 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 9.02
490 TestStartStop/group/newest-cni/serial/DeployApp 0
491 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 3.06
492 TestStartStop/group/newest-cni/serial/Stop 82.49
493 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.1
494 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
495 TestStartStop/group/no-preload/serial/Pause 2.65
496 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 7.01
497 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
498 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.2
499 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.46
500 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.14
501 TestStartStop/group/newest-cni/serial/SecondStart 30.11
502 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
503 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
504 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
505 TestStartStop/group/newest-cni/serial/Pause 3.26
TestDownloadOnly/v1.28.0/json-events (25.49s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-396191 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-396191 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (25.492459149s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (25.49s)
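The json-events subtests run minikube start with -o=json, which makes minikube print one JSON event per line instead of its normal console output. A small sketch for watching those events by hand, assuming jq is available; the profile name download-only-demo is arbitrary, and .data.message is an assumption about where the human-readable text sits in the event payload:

	out/minikube-linux-amd64 start -o=json --download-only -p download-only-demo \
	  --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2 \
	  | jq -r '.data.message? // empty'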

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1217 19:20:30.944275    7531 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1217 19:20:30.944379    7531 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-3611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
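The preload-exists subtests only assert that the tarball downloaded in the previous step is present in the local cache; no cluster is started. The same check can be run by hand, assuming the default $HOME/.minikube location rather than the Jenkins workspace path shown in the log:

	ls -lh "$HOME/.minikube/cache/preloaded-tarball/" \
	  | grep 'preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4'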

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-396191
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-396191: exit status 85 (81.750915ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-396191 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-396191 │ jenkins │ v1.37.0 │ 17 Dec 25 19:20 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 19:20:05
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 19:20:05.506876    7543 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:20:05.508007    7543 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:20:05.508037    7543 out.go:374] Setting ErrFile to fd 2...
	I1217 19:20:05.508045    7543 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:20:05.508743    7543 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-3611/.minikube/bin
	W1217 19:20:05.508916    7543 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22186-3611/.minikube/config/config.json: open /home/jenkins/minikube-integration/22186-3611/.minikube/config/config.json: no such file or directory
	I1217 19:20:05.509491    7543 out.go:368] Setting JSON to true
	I1217 19:20:05.510367    7543 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":144,"bootTime":1765999061,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 19:20:05.510484    7543 start.go:143] virtualization: kvm guest
	I1217 19:20:05.515500    7543 out.go:99] [download-only-396191] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1217 19:20:05.515723    7543 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22186-3611/.minikube/cache/preloaded-tarball: no such file or directory
	I1217 19:20:05.515740    7543 notify.go:221] Checking for updates...
	I1217 19:20:05.517549    7543 out.go:171] MINIKUBE_LOCATION=22186
	I1217 19:20:05.519164    7543 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 19:20:05.520722    7543 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22186-3611/kubeconfig
	I1217 19:20:05.522373    7543 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-3611/.minikube
	I1217 19:20:05.523995    7543 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1217 19:20:05.526927    7543 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1217 19:20:05.527239    7543 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 19:20:06.058895    7543 out.go:99] Using the kvm2 driver based on user configuration
	I1217 19:20:06.058942    7543 start.go:309] selected driver: kvm2
	I1217 19:20:06.058949    7543 start.go:927] validating driver "kvm2" against <nil>
	I1217 19:20:06.059346    7543 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 19:20:06.059998    7543 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1217 19:20:06.060156    7543 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1217 19:20:06.060187    7543 cni.go:84] Creating CNI manager for ""
	I1217 19:20:06.060236    7543 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 19:20:06.060246    7543 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1217 19:20:06.060283    7543 start.go:353] cluster config:
	{Name:download-only-396191 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-396191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 19:20:06.060513    7543 iso.go:125] acquiring lock: {Name:mkf0d7f706dad630931de886de0fce55b517853c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 19:20:06.062425    7543 out.go:99] Downloading VM boot image ...
	I1217 19:20:06.062480    7543 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso.sha256 -> /home/jenkins/minikube-integration/22186-3611/.minikube/cache/iso/amd64/minikube-v1.37.0-1765965980-22186-amd64.iso
	I1217 19:20:17.477862    7543 out.go:99] Starting "download-only-396191" primary control-plane node in "download-only-396191" cluster
	I1217 19:20:17.477902    7543 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1217 19:20:17.588624    7543 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1217 19:20:17.588655    7543 cache.go:65] Caching tarball of preloaded images
	I1217 19:20:17.588871    7543 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1217 19:20:17.590895    7543 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1217 19:20:17.590920    7543 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1217 19:20:17.705893    7543 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1217 19:20:17.706016    7543 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/22186-3611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1217 19:20:29.873472    7543 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1217 19:20:29.873887    7543 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/download-only-396191/config.json ...
	I1217 19:20:29.873926    7543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/download-only-396191/config.json: {Name:mk90883b6e134de6fd3ade506da8f933903b4261 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:20:29.874119    7543 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1217 19:20:29.874347    7543 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/22186-3611/.minikube/cache/linux/amd64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-396191 host does not exist
	  To start a cluster, run: "minikube start -p download-only-396191"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
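Note on the download lines in the log above: each URL carries a checksum= query (file:...sha256 for the ISO and kubectl, md5: plus the digest returned by the GCS API for the preload tarball) that minikube's downloader uses to verify the artifact it fetched. A minimal sketch of repeating the kubectl check by hand, assuming curl and sha256sum are available on the host; the URL is taken from the log and nothing below is executed by the test itself:

  # fetch the binary and its published SHA-256, then have sha256sum compare them
  URL=https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl
  curl -fsSLO "$URL"
  curl -fsSLO "$URL.sha256"
  echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check -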

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-396191
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.34.3/json-events (11.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-605458 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-605458 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (11.241595579s)
--- PASS: TestDownloadOnly/v1.34.3/json-events (11.24s)

                                                
                                    
TestDownloadOnly/v1.34.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/preload-exists
I1217 19:20:42.579568    7531 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
I1217 19:20:42.579599    7531 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-3611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.3/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-605458
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-605458: exit status 85 (75.727034ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-396191 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-396191 │ jenkins │ v1.37.0 │ 17 Dec 25 19:20 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.37.0 │ 17 Dec 25 19:20 UTC │ 17 Dec 25 19:20 UTC │
	│ delete  │ -p download-only-396191                                                                                                                                                 │ download-only-396191 │ jenkins │ v1.37.0 │ 17 Dec 25 19:20 UTC │ 17 Dec 25 19:20 UTC │
	│ start   │ -o=json --download-only -p download-only-605458 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-605458 │ jenkins │ v1.37.0 │ 17 Dec 25 19:20 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 19:20:31
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 19:20:31.392708    7803 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:20:31.392994    7803 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:20:31.393005    7803 out.go:374] Setting ErrFile to fd 2...
	I1217 19:20:31.393010    7803 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:20:31.393233    7803 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-3611/.minikube/bin
	I1217 19:20:31.393738    7803 out.go:368] Setting JSON to true
	I1217 19:20:31.394604    7803 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":170,"bootTime":1765999061,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 19:20:31.394673    7803 start.go:143] virtualization: kvm guest
	I1217 19:20:31.396980    7803 out.go:99] [download-only-605458] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 19:20:31.397122    7803 notify.go:221] Checking for updates...
	I1217 19:20:31.398554    7803 out.go:171] MINIKUBE_LOCATION=22186
	I1217 19:20:31.399851    7803 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 19:20:31.401110    7803 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22186-3611/kubeconfig
	I1217 19:20:31.402201    7803 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-3611/.minikube
	I1217 19:20:31.404425    7803 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1217 19:20:31.406678    7803 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1217 19:20:31.406894    7803 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 19:20:31.441727    7803 out.go:99] Using the kvm2 driver based on user configuration
	I1217 19:20:31.441768    7803 start.go:309] selected driver: kvm2
	I1217 19:20:31.441777    7803 start.go:927] validating driver "kvm2" against <nil>
	I1217 19:20:31.442123    7803 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 19:20:31.442676    7803 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1217 19:20:31.442854    7803 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1217 19:20:31.442885    7803 cni.go:84] Creating CNI manager for ""
	I1217 19:20:31.442950    7803 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 19:20:31.442967    7803 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1217 19:20:31.443042    7803 start.go:353] cluster config:
	{Name:download-only-605458 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:download-only-605458 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 19:20:31.443171    7803 iso.go:125] acquiring lock: {Name:mkf0d7f706dad630931de886de0fce55b517853c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 19:20:31.444815    7803 out.go:99] Starting "download-only-605458" primary control-plane node in "download-only-605458" cluster
	I1217 19:20:31.444850    7803 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 19:20:31.966043    7803 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1217 19:20:31.966095    7803 cache.go:65] Caching tarball of preloaded images
	I1217 19:20:31.966320    7803 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 19:20:31.968431    7803 out.go:99] Downloading Kubernetes v1.34.3 preload ...
	I1217 19:20:31.968463    7803 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1217 19:20:32.078064    7803 preload.go:295] Got checksum from GCS API "fdea575627999e8631bb8fa579d884c7"
	I1217 19:20:32.078108    7803 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:fdea575627999e8631bb8fa579d884c7 -> /home/jenkins/minikube-integration/22186-3611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-605458 host does not exist
	  To start a cluster, run: "minikube start -p download-only-605458"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.3/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.34.3/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.3/DeleteAll (0.16s)

                                                
                                    
TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-605458
--- PASS: TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/json-events (12.69s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-238357 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-238357 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (12.687062916s)
--- PASS: TestDownloadOnly/v1.35.0-rc.1/json-events (12.69s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/preload-exists
I1217 19:20:55.656701    7531 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
I1217 19:20:55.656736    7531 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-3611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-rc.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-238357
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-238357: exit status 85 (73.861788ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                     ARGS                                                                                     │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-396191 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio      │ download-only-396191 │ jenkins │ v1.37.0 │ 17 Dec 25 19:20 UTC │                     │
	│ delete  │ --all                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 17 Dec 25 19:20 UTC │ 17 Dec 25 19:20 UTC │
	│ delete  │ -p download-only-396191                                                                                                                                                      │ download-only-396191 │ jenkins │ v1.37.0 │ 17 Dec 25 19:20 UTC │ 17 Dec 25 19:20 UTC │
	│ start   │ -o=json --download-only -p download-only-605458 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio      │ download-only-605458 │ jenkins │ v1.37.0 │ 17 Dec 25 19:20 UTC │                     │
	│ delete  │ --all                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 17 Dec 25 19:20 UTC │ 17 Dec 25 19:20 UTC │
	│ delete  │ -p download-only-605458                                                                                                                                                      │ download-only-605458 │ jenkins │ v1.37.0 │ 17 Dec 25 19:20 UTC │ 17 Dec 25 19:20 UTC │
	│ start   │ -o=json --download-only -p download-only-238357 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-238357 │ jenkins │ v1.37.0 │ 17 Dec 25 19:20 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 19:20:43
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 19:20:43.026019    8014 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:20:43.026131    8014 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:20:43.026140    8014 out.go:374] Setting ErrFile to fd 2...
	I1217 19:20:43.026144    8014 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:20:43.026354    8014 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-3611/.minikube/bin
	I1217 19:20:43.026839    8014 out.go:368] Setting JSON to true
	I1217 19:20:43.027706    8014 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":182,"bootTime":1765999061,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 19:20:43.027774    8014 start.go:143] virtualization: kvm guest
	I1217 19:20:43.029880    8014 out.go:99] [download-only-238357] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 19:20:43.030028    8014 notify.go:221] Checking for updates...
	I1217 19:20:43.031465    8014 out.go:171] MINIKUBE_LOCATION=22186
	I1217 19:20:43.032995    8014 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 19:20:43.034200    8014 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22186-3611/kubeconfig
	I1217 19:20:43.035491    8014 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-3611/.minikube
	I1217 19:20:43.036721    8014 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1217 19:20:43.038884    8014 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1217 19:20:43.039185    8014 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 19:20:43.070569    8014 out.go:99] Using the kvm2 driver based on user configuration
	I1217 19:20:43.070618    8014 start.go:309] selected driver: kvm2
	I1217 19:20:43.070625    8014 start.go:927] validating driver "kvm2" against <nil>
	I1217 19:20:43.070963    8014 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 19:20:43.071499    8014 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1217 19:20:43.071674    8014 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1217 19:20:43.071707    8014 cni.go:84] Creating CNI manager for ""
	I1217 19:20:43.071769    8014 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 19:20:43.071781    8014 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1217 19:20:43.071837    8014 start.go:353] cluster config:
	{Name:download-only-238357 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:download-only-238357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 19:20:43.071967    8014 iso.go:125] acquiring lock: {Name:mkf0d7f706dad630931de886de0fce55b517853c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 19:20:43.073544    8014 out.go:99] Starting "download-only-238357" primary control-plane node in "download-only-238357" cluster
	I1217 19:20:43.073567    8014 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 19:20:43.259032    8014 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1217 19:20:43.259062    8014 cache.go:65] Caching tarball of preloaded images
	I1217 19:20:43.259219    8014 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 19:20:43.261249    8014 out.go:99] Downloading Kubernetes v1.35.0-rc.1 preload ...
	I1217 19:20:43.261282    8014 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1217 19:20:43.369771    8014 preload.go:295] Got checksum from GCS API "46a82b10f18f180acaede5af8ca381a9"
	I1217 19:20:43.369829    8014 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:46a82b10f18f180acaede5af8ca381a9 -> /home/jenkins/minikube-integration/22186-3611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1217 19:20:52.929490    8014 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1217 19:20:52.929918    8014 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/download-only-238357/config.json ...
	I1217 19:20:52.929963    8014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/download-only-238357/config.json: {Name:mk701c001d8adce0b6d61fadd3b4e115936d815f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:20:52.930147    8014 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 19:20:52.930342    8014 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/22186-3611/.minikube/cache/linux/amd64/v1.35.0-rc.1/kubectl
	
	
	* The control-plane node download-only-238357 host does not exist
	  To start a cluster, run: "minikube start -p download-only-238357"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-rc.1/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-rc.1/DeleteAll (0.16s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-238357
--- PASS: TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.65s)

                                                
                                                
=== RUN   TestBinaryMirror
I1217 19:20:56.459819    7531 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-144298 --alsologtostderr --binary-mirror http://127.0.0.1:44329 --driver=kvm2  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-144298" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-144298
--- PASS: TestBinaryMirror (0.65s)

                                                
                                    
TestOffline (97.16s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-597150 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-597150 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m36.319428715s)
helpers_test.go:176: Cleaning up "offline-crio-597150" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-597150
--- PASS: TestOffline (97.16s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-886556
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-886556: exit status 85 (65.061189ms)

                                                
                                                
-- stdout --
	* Profile "addons-886556" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-886556"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-886556
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-886556: exit status 85 (61.691716ms)

                                                
                                                
-- stdout --
	* Profile "addons-886556" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-886556"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (137.75s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-886556 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-886556 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m17.749085421s)
--- PASS: TestAddons/Setup (137.75s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-886556 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-886556 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (12.57s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-886556 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-886556 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [2be54a14-f7e4-4cce-a350-4f3c9438f053] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [2be54a14-f7e4-4cce-a350-4f3c9438f053] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 12.004852311s
addons_test.go:696: (dbg) Run:  kubectl --context addons-886556 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-886556 describe sa gcp-auth-test
addons_test.go:746: (dbg) Run:  kubectl --context addons-886556 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (12.57s)

                                                
                                    
TestAddons/parallel/Registry (20.44s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 12.538408ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-7vxz4" [51d280f0-5585-48ff-9878-7cdf3f790c88] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.006272389s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-zf2zm" [d7cb4d26-907e-4609-8385-a07e0958bd41] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004280731s
addons_test.go:394: (dbg) Run:  kubectl --context addons-886556 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-886556 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-886556 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.54071056s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-amd64 -p addons-886556 ip
2025/12/17 19:23:56 [DEBUG] GET http://192.168.39.92:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-886556 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (20.44s)
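For context on the probes above: the registry addon is exercised both through its in-cluster DNS name (the wget --spider pod) and from the host via the node IP on port 5000 (the DEBUG GET line). A hedged sketch of repeating both checks by hand; the /v2/_catalog path is the standard Docker Registry HTTP API v2 catalog endpoint and is an addition here, not something the test queries:

  # in-cluster probe, mirroring the test's wget --spider call
  kubectl --context addons-886556 run --rm registry-probe --restart=Never \
    --image=gcr.io/k8s-minikube/busybox -it -- \
    wget --spider -S http://registry.kube-system.svc.cluster.local
  # host-side probe against the node IP reported by "minikube ip"
  curl -s "http://$(out/minikube-linux-amd64 -p addons-886556 ip):5000/v2/_catalog"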

                                                
                                    
TestAddons/parallel/RegistryCreds (0.71s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 5.293577ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-886556
addons_test.go:334: (dbg) Run:  kubectl --context addons-886556 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-886556 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.71s)

                                                
                                    
TestAddons/parallel/InspektorGadget (12.21s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-5mtpv" [38f32547-b77a-46a7-a36d-8a131a614168] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.00472788s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-886556 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-886556 addons disable inspektor-gadget --alsologtostderr -v=1: (6.201456532s)
--- PASS: TestAddons/parallel/InspektorGadget (12.21s)

                                                
                                    
TestAddons/parallel/MetricsServer (7.19s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 21.427351ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-qq7z2" [1a0a29d5-b863-4f43-8e30-20e811421d49] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.005534329s
addons_test.go:465: (dbg) Run:  kubectl --context addons-886556 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-886556 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-886556 addons disable metrics-server --alsologtostderr -v=1: (1.096951191s)
--- PASS: TestAddons/parallel/MetricsServer (7.19s)

                                                
                                    
TestAddons/parallel/CSI (43.15s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1217 19:24:03.416422    7531 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1217 19:24:03.423221    7531 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1217 19:24:03.423248    7531 kapi.go:107] duration metric: took 6.834203ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 6.843882ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-886556 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-886556 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-886556 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-886556 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-886556 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-886556 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-886556 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [d54a8abf-2cc7-46f9-9acc-ac1bef9dc4c6] Pending
helpers_test.go:353: "task-pv-pod" [d54a8abf-2cc7-46f9-9acc-ac1bef9dc4c6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [d54a8abf-2cc7-46f9-9acc-ac1bef9dc4c6] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.005008677s
addons_test.go:574: (dbg) Run:  kubectl --context addons-886556 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-886556 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:436: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:428: (dbg) Run:  kubectl --context addons-886556 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-886556 delete pod task-pv-pod
addons_test.go:584: (dbg) Done: kubectl --context addons-886556 delete pod task-pv-pod: (1.025064943s)
addons_test.go:590: (dbg) Run:  kubectl --context addons-886556 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-886556 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-886556 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-886556 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-886556 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-886556 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-886556 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-886556 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-886556 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-886556 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-886556 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-886556 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-886556 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-886556 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-886556 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-886556 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [b2a03e71-b653-4bb5-8659-7dc1ca2e6adf] Pending
helpers_test.go:353: "task-pv-pod-restore" [b2a03e71-b653-4bb5-8659-7dc1ca2e6adf] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [b2a03e71-b653-4bb5-8659-7dc1ca2e6adf] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.017225576s
addons_test.go:616: (dbg) Run:  kubectl --context addons-886556 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-886556 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-886556 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-886556 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-886556 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-886556 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.125533176s)
--- PASS: TestAddons/parallel/CSI (43.15s)
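The repeated "get pvc ... -o jsonpath={.status.phase}" lines above are the test helper polling until each claim reports Bound and the snapshot reports ready. A hedged equivalent using kubectl's built-in wait (the jsonpath form of --for needs a reasonably recent kubectl); the object names come from minikube's testdata and are assumed unchanged:

  # block until the restored claim binds and the snapshot becomes usable
  kubectl --context addons-886556 wait --for=jsonpath='{.status.phase}'=Bound \
    pvc/hpvc-restore --timeout=6m
  kubectl --context addons-886556 wait --for=jsonpath='{.status.readyToUse}'=true \
    volumesnapshot/new-snapshot-demo --timeout=6m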

                                                
                                    
TestAddons/parallel/Headlamp (23.22s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-886556 --alsologtostderr -v=1
addons_test.go:810: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-886556 --alsologtostderr -v=1: (1.161623679s)
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-dfcdc64b-nvtt8" [82aeca86-4ffb-4d10-99d7-7377d031c1db] Pending
helpers_test.go:353: "headlamp-dfcdc64b-nvtt8" [82aeca86-4ffb-4d10-99d7-7377d031c1db] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-dfcdc64b-nvtt8" [82aeca86-4ffb-4d10-99d7-7377d031c1db] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 16.009052378s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-886556 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-886556 addons disable headlamp --alsologtostderr -v=1: (6.04476518s)
--- PASS: TestAddons/parallel/Headlamp (23.22s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.59s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-pdt4n" [bb50b593-4cd8-49be-953f-23273df07595] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.006750521s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-886556 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.59s)

                                                
                                    
TestAddons/parallel/LocalPath (61.19s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-886556 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-886556 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-886556 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-886556 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-886556 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-886556 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-886556 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-886556 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-886556 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-886556 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-886556 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [2a49b4c0-fab6-4d04-9841-655446e0b0a0] Pending
helpers_test.go:353: "test-local-path" [2a49b4c0-fab6-4d04-9841-655446e0b0a0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [2a49b4c0-fab6-4d04-9841-655446e0b0a0] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [2a49b4c0-fab6-4d04-9841-655446e0b0a0] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 9.009755492s
addons_test.go:969: (dbg) Run:  kubectl --context addons-886556 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-amd64 -p addons-886556 ssh "cat /opt/local-path-provisioner/pvc-51a5db76-42c3-423c-b2d7-c24e496695a8_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-886556 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-886556 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-886556 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-886556 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.28770733s)
--- PASS: TestAddons/parallel/LocalPath (61.19s)
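As the cat path above suggests, the storage-provisioner-rancher (local-path) addon materializes each bound volume as a host directory under /opt/local-path-provisioner, named after the generated PV, the namespace, and the claim (pvc-..._default_test-pvc in this run). A hedged way to look at that layout directly while such a volume exists:

  # list provisioned volume directories on the node
  out/minikube-linux-amd64 -p addons-886556 ssh "ls /opt/local-path-provisioner/"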

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (7s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-9r9hc" [687ccec9-fd49-4130-942a-adaa42174493] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.007940898s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-886556 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (7.00s)

                                                
                                    
TestAddons/parallel/Yakd (10.97s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-6654c87f9b-dh6ln" [07dd12d6-5900-4771-80c8-515b158469fb] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004830312s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-886556 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-886556 addons disable yakd --alsologtostderr -v=1: (5.964380055s)
--- PASS: TestAddons/parallel/Yakd (10.97s)

TestAddons/StoppedEnableDisable (89.46s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-886556
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-886556: (1m29.258374265s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-886556
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-886556
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-886556
--- PASS: TestAddons/StoppedEnableDisable (89.46s)

TestCertOptions (70.13s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-597207 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-597207 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m7.825901145s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-597207 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-597207 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-597207 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-597207" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-597207
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-597207: (1.890048661s)
--- PASS: TestCertOptions (70.13s)
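
The SAN check above (cert_options_test.go:60) can be repeated by hand against a running profile. A minimal sketch, reusing the profile name and certificate path from this run; the grep filter is an addition for readability, not part of the test:

	# Show only the Subject Alternative Name block of the apiserver certificate
	out/minikube-linux-amd64 -p cert-options-597207 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 "Subject Alternative Name"
	# The values passed at start time (--apiserver-ips 127.0.0.1 and 192.168.15.15,
	# --apiserver-names localhost and www.google.com) should appear in that block.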

TestCertExpiration (294.27s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-229742 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-229742 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (52.753385524s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-229742 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-229742 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (1m0.646663848s)
helpers_test.go:176: Cleaning up "cert-expiration-229742" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-229742
--- PASS: TestCertExpiration (294.27s)

TestForceSystemdFlag (100.8s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-747740 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1217 20:17:59.274496    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:18:16.195856    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-747740 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m39.8041393s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-747740 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-747740" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-747740
--- PASS: TestForceSystemdFlag (100.80s)
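
The flag is verified by reading back the CRI-O drop-in generated by minikube (docker_test.go:132 above). A minimal sketch of the same check, assuming the standard CRI-O cgroup_manager key and the profile name from this run:

	# With --force-systemd the drop-in is expected to select the systemd cgroup
	# driver, i.e. contain a line like: cgroup_manager = "systemd"
	out/minikube-linux-amd64 -p force-systemd-flag-747740 ssh \
	  "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"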

TestForceSystemdEnv (90.82s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-991241 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-991241 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m29.973660285s)
helpers_test.go:176: Cleaning up "force-systemd-env-991241" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-991241
--- PASS: TestForceSystemdEnv (90.82s)

TestErrorSpam/setup (38.56s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-801873 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-801873 --driver=kvm2  --container-runtime=crio
E1217 19:28:16.201272    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:28:16.207686    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:28:16.219069    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:28:16.240512    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:28:16.281932    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:28:16.363492    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:28:16.525115    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:28:16.846858    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:28:17.488918    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:28:18.770513    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:28:21.333423    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:28:26.454953    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:28:36.697391    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-801873 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-801873 --driver=kvm2  --container-runtime=crio: (38.555760137s)
--- PASS: TestErrorSpam/setup (38.56s)

TestErrorSpam/start (0.34s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-801873 --log_dir /tmp/nospam-801873 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-801873 --log_dir /tmp/nospam-801873 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-801873 --log_dir /tmp/nospam-801873 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

TestErrorSpam/status (0.69s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-801873 --log_dir /tmp/nospam-801873 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-801873 --log_dir /tmp/nospam-801873 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-801873 --log_dir /tmp/nospam-801873 status
--- PASS: TestErrorSpam/status (0.69s)

TestErrorSpam/pause (1.59s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-801873 --log_dir /tmp/nospam-801873 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-801873 --log_dir /tmp/nospam-801873 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-801873 --log_dir /tmp/nospam-801873 pause
--- PASS: TestErrorSpam/pause (1.59s)

TestErrorSpam/unpause (1.8s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-801873 --log_dir /tmp/nospam-801873 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-801873 --log_dir /tmp/nospam-801873 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-801873 --log_dir /tmp/nospam-801873 unpause
--- PASS: TestErrorSpam/unpause (1.80s)

TestErrorSpam/stop (5.31s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-801873 --log_dir /tmp/nospam-801873 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-801873 --log_dir /tmp/nospam-801873 stop: (2.004449319s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-801873 --log_dir /tmp/nospam-801873 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-801873 --log_dir /tmp/nospam-801873 stop: (1.838855141s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-801873 --log_dir /tmp/nospam-801873 stop
E1217 19:28:57.178780    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-801873 --log_dir /tmp/nospam-801873 stop: (1.465367974s)
--- PASS: TestErrorSpam/stop (5.31s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22186-3611/.minikube/files/etc/test/nested/copy/7531/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (89.42s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-345985 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1217 19:29:38.140388    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-345985 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m29.419585297s)
--- PASS: TestFunctional/serial/StartWithProxy (89.42s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (31.25s)

=== RUN   TestFunctional/serial/SoftStart
I1217 19:30:27.033987    7531 config.go:182] Loaded profile config "functional-345985": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-345985 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-345985 --alsologtostderr -v=8: (31.249018741s)
functional_test.go:678: soft start took 31.249702167s for "functional-345985" cluster.
I1217 19:30:58.283307    7531 config.go:182] Loaded profile config "functional-345985": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestFunctional/serial/SoftStart (31.25s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-345985 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-345985 cache add registry.k8s.io/pause:3.1: (1.054041471s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 cache add registry.k8s.io/pause:3.3
E1217 19:31:00.061999    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-345985 cache add registry.k8s.io/pause:3.3: (1.096101199s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-345985 cache add registry.k8s.io/pause:latest: (1.115498756s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.27s)

TestFunctional/serial/CacheCmd/cache/add_local (2.26s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-345985 /tmp/TestFunctionalserialCacheCmdcacheadd_local1103234978/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 cache add minikube-local-cache-test:functional-345985
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-345985 cache add minikube-local-cache-test:functional-345985: (1.915621091s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 cache delete minikube-local-cache-test:functional-345985
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-345985
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.26s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.52s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-345985 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (169.326597ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.52s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 kubectl -- --context functional-345985 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-345985 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (35.85s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-345985 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-345985 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.847458592s)
functional_test.go:776: restart took 35.847556895s for "functional-345985" cluster.
I1217 19:31:41.973497    7531 config.go:182] Loaded profile config "functional-345985": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestFunctional/serial/ExtraConfig (35.85s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-345985 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.29s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-345985 logs: (1.290580482s)
--- PASS: TestFunctional/serial/LogsCmd (1.29s)

TestFunctional/serial/LogsFileCmd (1.29s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 logs --file /tmp/TestFunctionalserialLogsFileCmd3155013434/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-345985 logs --file /tmp/TestFunctionalserialLogsFileCmd3155013434/001/logs.txt: (1.286523092s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.29s)

TestFunctional/serial/InvalidService (4.91s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-345985 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-345985
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-345985: exit status 115 (228.899447ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.4:31270 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-345985 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-345985 delete -f testdata/invalidsvc.yaml: (1.492434887s)
--- PASS: TestFunctional/serial/InvalidService (4.91s)
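
The SVC_UNREACHABLE exit above comes from a Service with no running backend pod. testdata/invalidsvc.yaml itself is not reproduced in this log; a hypothetical minimal equivalent (names are illustrative) that should make minikube service fail the same way:

	# NodePort service whose selector matches no pods
	kubectl --context functional-345985 apply -f - <<-'EOF'
	apiVersion: v1
	kind: Service
	metadata:
	  name: invalid-svc
	spec:
	  type: NodePort
	  selector:
	    app: no-such-app
	  ports:
	  - port: 80
	EOF
	# Expected to exit with status 115 / SVC_UNREACHABLE, as in the run above
	out/minikube-linux-amd64 service invalid-svc -p functional-345985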

TestFunctional/parallel/ConfigCmd (0.39s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-345985 config get cpus: exit status 14 (65.748354ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-345985 config get cpus: exit status 14 (58.960769ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.39s)

TestFunctional/parallel/DashboardCmd (13.33s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-345985 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-345985 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 14297: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.33s)

TestFunctional/parallel/DryRun (0.23s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-345985 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-345985 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (110.57948ms)

-- stdout --
	* [functional-345985] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22186
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22186-3611/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-3611/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1217 19:32:25.443904   14548 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:32:25.444141   14548 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:32:25.444151   14548 out.go:374] Setting ErrFile to fd 2...
	I1217 19:32:25.444156   14548 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:32:25.444377   14548 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-3611/.minikube/bin
	I1217 19:32:25.444842   14548 out.go:368] Setting JSON to false
	I1217 19:32:25.445711   14548 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":884,"bootTime":1765999061,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 19:32:25.445770   14548 start.go:143] virtualization: kvm guest
	I1217 19:32:25.447799   14548 out.go:179] * [functional-345985] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 19:32:25.449030   14548 out.go:179]   - MINIKUBE_LOCATION=22186
	I1217 19:32:25.449025   14548 notify.go:221] Checking for updates...
	I1217 19:32:25.450391   14548 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 19:32:25.451832   14548 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-3611/kubeconfig
	I1217 19:32:25.453101   14548 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-3611/.minikube
	I1217 19:32:25.454199   14548 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 19:32:25.455556   14548 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 19:32:25.457138   14548 config.go:182] Loaded profile config "functional-345985": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:32:25.457718   14548 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 19:32:25.490182   14548 out.go:179] * Using the kvm2 driver based on existing profile
	I1217 19:32:25.491490   14548 start.go:309] selected driver: kvm2
	I1217 19:32:25.491505   14548 start.go:927] validating driver "kvm2" against &{Name:functional-345985 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.3 ClusterName:functional-345985 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 19:32:25.491627   14548 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 19:32:25.493788   14548 out.go:203] 
	W1217 19:32:25.494967   14548 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1217 19:32:25.496010   14548 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-345985 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.23s)

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-345985 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-345985 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (111.797729ms)

-- stdout --
	* [functional-345985] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22186
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22186-3611/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-3611/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1217 19:32:10.483055   13990 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:32:10.483148   13990 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:32:10.483159   13990 out.go:374] Setting ErrFile to fd 2...
	I1217 19:32:10.483164   13990 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:32:10.483440   13990 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-3611/.minikube/bin
	I1217 19:32:10.483863   13990 out.go:368] Setting JSON to false
	I1217 19:32:10.484653   13990 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":869,"bootTime":1765999061,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 19:32:10.484705   13990 start.go:143] virtualization: kvm guest
	I1217 19:32:10.486700   13990 out.go:179] * [functional-345985] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1217 19:32:10.487880   13990 out.go:179]   - MINIKUBE_LOCATION=22186
	I1217 19:32:10.487886   13990 notify.go:221] Checking for updates...
	I1217 19:32:10.490219   13990 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 19:32:10.491245   13990 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-3611/kubeconfig
	I1217 19:32:10.492241   13990 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-3611/.minikube
	I1217 19:32:10.493296   13990 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 19:32:10.494313   13990 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 19:32:10.496321   13990 config.go:182] Loaded profile config "functional-345985": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:32:10.496797   13990 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 19:32:10.530048   13990 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1217 19:32:10.531110   13990 start.go:309] selected driver: kvm2
	I1217 19:32:10.531124   13990 start.go:927] validating driver "kvm2" against &{Name:functional-345985 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.3 ClusterName:functional-345985 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Moun
tString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 19:32:10.531215   13990 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 19:32:10.532919   13990 out.go:203] 
	W1217 19:32:10.533899   13990 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1217 19:32:10.534808   13990 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/StatusCmd (0.75s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.75s)

TestFunctional/parallel/ServiceCmdConnect (17.48s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-345985 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-345985 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-xsqfl" [1618d6c5-e948-44a2-8f8e-051e82da32c9] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-xsqfl" [1618d6c5-e948-44a2-8f8e-051e82da32c9] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 17.006499872s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.4:31347
functional_test.go:1680: http://192.168.39.4:31347: success! body:
Request served by hello-node-connect-7d85dfc575-xsqfl

HTTP/1.1 GET /

Host: 192.168.39.4:31347
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (17.48s)

TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (42.87s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [d91cee4a-f1a0-459d-bda5-af0a041d3f2e] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.006503832s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-345985 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-345985 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-345985 get pvc myclaim -o=json
I1217 19:31:55.830581    7531 retry.go:31] will retry after 2.690135546s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:2f273d83-3459-4ca8-9bf2-6e3d0ddfeb4e ResourceVersion:701 Generation:0 CreationTimestamp:2025-12-17 19:31:55 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001c102b0 VolumeMode:0xc001c102c0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-345985 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-345985 apply -f testdata/storage-provisioner/pod.yaml
I1217 19:31:58.723226    7531 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [c3e9a1e7-b931-48b0-a842-d6ad032ab02a] Pending
helpers_test.go:353: "sp-pod" [c3e9a1e7-b931-48b0-a842-d6ad032ab02a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [c3e9a1e7-b931-48b0-a842-d6ad032ab02a] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 24.009085447s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-345985 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-345985 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-345985 delete -f testdata/storage-provisioner/pod.yaml: (1.153613413s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-345985 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [acef658d-5a4a-4324-b617-ed771de11f6a] Pending
helpers_test.go:353: "sp-pod" [acef658d-5a4a-4324-b617-ed771de11f6a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [acef658d-5a4a-4324-b617-ed771de11f6a] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.00441889s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-345985 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (42.87s)
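
Note: the test's own pvc.yaml is not reproduced in this log, but the object dump in the retry message above implies a claim equivalent to the following sketch (a reconstruction; names and the timeout are illustrative). The Pending-to-Bound wait can be reproduced with kubectl wait on a reasonably recent kubectl:

# reconstructed claim: 500Mi, ReadWriteOnce, Filesystem volume mode, default storage class
kubectl --context functional-345985 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
  namespace: default
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
  volumeMode: Filesystem
EOF
# poll until the claim leaves Pending, mirroring the test's retry loop
kubectl --context functional-345985 wait pvc/myclaim --for=jsonpath='{.status.phase}'=Bound --timeout=120s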

                                                
                                    
TestFunctional/parallel/SSHCmd (0.35s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.35s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.14s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 ssh -n functional-345985 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 cp functional-345985:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2856038472/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 ssh -n functional-345985 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 ssh -n functional-345985 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.14s)
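
Note: the three checks above are a host-to-guest copy, a guest-to-host copy, and a copy into a guest directory that does not yet exist. A condensed manual round-trip (the destination path and the final diff are illustrative additions; the test compares contents through its own helpers):

out/minikube-linux-amd64 -p functional-345985 cp testdata/cp-test.txt /home/docker/cp-test.txt
out/minikube-linux-amd64 -p functional-345985 cp functional-345985:/home/docker/cp-test.txt /tmp/cp-test-roundtrip.txt
diff testdata/cp-test.txt /tmp/cp-test-roundtrip.txt   # empty output means the round-trip preserved the file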

                                                
                                    
TestFunctional/parallel/MySQL (35.32s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-345985 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-6bcdcbc558-924zl" [444245c3-8bd8-484e-8cb6-171cb22697d2] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-6bcdcbc558-924zl" [444245c3-8bd8-484e-8cb6-171cb22697d2] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.009080565s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-345985 exec mysql-6bcdcbc558-924zl -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-345985 exec mysql-6bcdcbc558-924zl -- mysql -ppassword -e "show databases;": exit status 1 (208.833735ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 19:32:16.224682    7531 retry.go:31] will retry after 999.598887ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-345985 exec mysql-6bcdcbc558-924zl -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-345985 exec mysql-6bcdcbc558-924zl -- mysql -ppassword -e "show databases;": exit status 1 (170.175784ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 19:32:17.394785    7531 retry.go:31] will retry after 984.729669ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-345985 exec mysql-6bcdcbc558-924zl -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-345985 exec mysql-6bcdcbc558-924zl -- mysql -ppassword -e "show databases;": exit status 1 (211.043637ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 19:32:18.591522    7531 retry.go:31] will retry after 3.370285351s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-345985 exec mysql-6bcdcbc558-924zl -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-345985 exec mysql-6bcdcbc558-924zl -- mysql -ppassword -e "show databases;": exit status 1 (160.866551ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 19:32:22.123678    7531 retry.go:31] will retry after 4.811159685s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-345985 exec mysql-6bcdcbc558-924zl -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (35.32s)
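
Note: the retries above are the expected settling period for the mysql image: ERROR 2002 means the server socket is not accepting connections yet, and the transient ERROR 1045 is most plausibly the init phase before the configured root password takes effect (an interpretation; the log only records the errors). A rough manual equivalent of the probe loop, not the test code itself:

POD=mysql-6bcdcbc558-924zl   # pod name taken from the log above
until kubectl --context functional-345985 exec "$POD" -- mysql -ppassword -e "show databases;"; do
  sleep 5   # keep retrying until mysqld answers, as the test's retry helper does
done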

                                                
                                    
TestFunctional/parallel/FileSync (0.16s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/7531/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 ssh "sudo cat /etc/test/nested/copy/7531/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.16s)

                                                
                                    
TestFunctional/parallel/CertSync (0.96s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/7531.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 ssh "sudo cat /etc/ssl/certs/7531.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/7531.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 ssh "sudo cat /usr/share/ca-certificates/7531.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/75312.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 ssh "sudo cat /etc/ssl/certs/75312.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/75312.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 ssh "sudo cat /usr/share/ca-certificates/75312.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.96s)
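
Note: each certificate is verified in /etc/ssl/certs, in /usr/share/ca-certificates, and under a hash-style name (51391683.0, 3ec20f2e.0), which looks like the usual openssl subject-hash symlink scheme (an inference, not stated in the log). A condensed manual check for the first certificate:

for f in /etc/ssl/certs/7531.pem /usr/share/ca-certificates/7531.pem /etc/ssl/certs/51391683.0; do
  out/minikube-linux-amd64 -p functional-345985 ssh "sudo cat $f" >/dev/null && echo "present: $f"
done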

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-345985 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.34s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-345985 ssh "sudo systemctl is-active docker": exit status 1 (164.257591ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-345985 ssh "sudo systemctl is-active containerd": exit status 1 (175.245346ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.34s)
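
Note: the non-zero exits here are the expected outcome, not a failure: with cri-o as the selected runtime, systemctl is-active prints "inactive" for docker and containerd and the ssh wrapper propagates the non-zero status (3) back out, which the test accepts. Manually:

# each command should print "inactive" and exit non-zero on a crio-based cluster
out/minikube-linux-amd64 -p functional-345985 ssh "sudo systemctl is-active docker" || echo "docker inactive (expected)"
out/minikube-linux-amd64 -p functional-345985 ssh "sudo systemctl is-active containerd" || echo "containerd inactive (expected)"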

                                                
                                    
TestFunctional/parallel/License (0.38s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (9.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-345985 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-345985 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-69vm7" [771bac3b-0647-4714-bb0e-bb1ba23f94c6] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-69vm7" [771bac3b-0647-4714-bb0e-bb1ba23f94c6] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.011359764s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.23s)
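
Note: the deployment and NodePort exposure can be reproduced with the same two kubectl commands; a rollout-status wait is shown here as an alternative readiness check (the test itself polls pods carrying the app=hello-node label):

kubectl --context functional-345985 create deployment hello-node --image kicbase/echo-server
kubectl --context functional-345985 expose deployment hello-node --type=NodePort --port=8080
kubectl --context functional-345985 rollout status deployment/hello-node --timeout=120s   # alternative to the label poll above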

                                                
                                    
TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
TestFunctional/parallel/Version/components (0.59s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.59s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-345985 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.3
registry.k8s.io/kube-proxy:v1.34.3
registry.k8s.io/kube-controller-manager:v1.34.3
registry.k8s.io/kube-apiserver:v1.34.3
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
localhost/minikube-local-cache-test:functional-345985
localhost/kicbase/echo-server:functional-345985
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-345985 image ls --format short --alsologtostderr:
I1217 19:32:26.998423   14661 out.go:360] Setting OutFile to fd 1 ...
I1217 19:32:26.998698   14661 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:32:26.998711   14661 out.go:374] Setting ErrFile to fd 2...
I1217 19:32:26.998717   14661 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:32:26.998983   14661 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-3611/.minikube/bin
I1217 19:32:26.999627   14661 config.go:182] Loaded profile config "functional-345985": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 19:32:26.999726   14661 config.go:182] Loaded profile config "functional-345985": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 19:32:27.002016   14661 ssh_runner.go:195] Run: systemctl --version
I1217 19:32:27.004434   14661 main.go:143] libmachine: domain functional-345985 has defined MAC address 52:54:00:9f:b6:2b in network mk-functional-345985
I1217 19:32:27.004869   14661 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9f:b6:2b", ip: ""} in network mk-functional-345985: {Iface:virbr1 ExpiryTime:2025-12-17 20:29:12 +0000 UTC Type:0 Mac:52:54:00:9f:b6:2b Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:functional-345985 Clientid:01:52:54:00:9f:b6:2b}
I1217 19:32:27.004895   14661 main.go:143] libmachine: domain functional-345985 has defined IP address 192.168.39.4 and MAC address 52:54:00:9f:b6:2b in network mk-functional-345985
I1217 19:32:27.005114   14661 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/functional-345985/id_rsa Username:docker}
I1217 19:32:27.133280   14661 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)
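
Note: as the --alsologtostderr trace shows, image ls is answered by running crictl inside the VM over ssh and reformatting its JSON; the raw data can be fetched directly if the formatted view ever looks suspect:

out/minikube-linux-amd64 -p functional-345985 ssh "sudo crictl images --output json"   # same source data, unformatted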

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-345985 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.95MB │
│ localhost/kicbase/echo-server           │ functional-345985  │ 9056ab77afb8e │ 4.95MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ localhost/minikube-local-cache-test     │ functional-345985  │ 10da9618eb60c │ 3.33kB │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.3            │ 5826b25d990d7 │ 76MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.3            │ 36eef8e07bdd6 │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ public.ecr.aws/docker/library/mysql     │ 8.4                │ 20d0be4ee4524 │ 804MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.3            │ aa27095f56193 │ 89.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.3            │ aec12dadf56dd │ 53.9MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ public.ecr.aws/nginx/nginx              │ alpine             │ a236f84b9d5d2 │ 55.2MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-345985 image ls --format table --alsologtostderr:
I1217 19:32:27.885055   14728 out.go:360] Setting OutFile to fd 1 ...
I1217 19:32:27.885358   14728 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:32:27.885373   14728 out.go:374] Setting ErrFile to fd 2...
I1217 19:32:27.885381   14728 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:32:27.885688   14728 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-3611/.minikube/bin
I1217 19:32:27.886564   14728 config.go:182] Loaded profile config "functional-345985": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 19:32:27.886707   14728 config.go:182] Loaded profile config "functional-345985": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 19:32:27.889328   14728 ssh_runner.go:195] Run: systemctl --version
I1217 19:32:27.891970   14728 main.go:143] libmachine: domain functional-345985 has defined MAC address 52:54:00:9f:b6:2b in network mk-functional-345985
I1217 19:32:27.892479   14728 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9f:b6:2b", ip: ""} in network mk-functional-345985: {Iface:virbr1 ExpiryTime:2025-12-17 20:29:12 +0000 UTC Type:0 Mac:52:54:00:9f:b6:2b Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:functional-345985 Clientid:01:52:54:00:9f:b6:2b}
I1217 19:32:27.892518   14728 main.go:143] libmachine: domain functional-345985 has defined IP address 192.168.39.4 and MAC address 52:54:00:9f:b6:2b in network mk-functional-345985
I1217 19:32:27.892733   14728 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/functional-345985/id_rsa Username:docker}
I1217 19:32:27.998470   14728 ssh_runner.go:195] Run: sudo crictl images --output json
2025/12/17 19:32:31 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-345985 image ls --format json --alsologtostderr:
[{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localho
st/kicbase/echo-server:functional-345985"],"size":"4945146"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"10da9618eb60c11b3f7dcde5c8402d759f24637bb02a80fa9a9a782251e9402c","repoDigests":["localhost/minikube-local-cache-test@sha256:a80e9133dbe7a22defeab63a696c9cb3118a42bfe944e72a3a347bfdc4d57105"],"repoTags":["localhost/minikube-local-cache-test:functional-345985"],"size":"3330"},{"id":"aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78","repoDigests":["registry.k8s.io/kube-scheduler@sha256:490ff7b484d67db4a77e8d4bba9f12da68f6a3cae8da3b977522b60c8b5092c9","registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2"],"repoTags
":["registry.k8s.io/kube-scheduler:v1.34.3"],"size":"53853013"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c","repoDigests":["public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff","public.ecr.aws/nginx/nginx@sha256:
ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55156597"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691","repoDigests":["registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6","registry.k8s.io/kube-proxy@sha256:aee44d152c9eaa4f3e10584e61ee501a094880168db257af1201c806982a0945"],"repoTags"
:["registry.k8s.io/kube-proxy:v1.34.3"],"size":"73145241"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036","public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233"],"repoTags":["public.ecr.aws/docker/library/mys
ql:8.4"],"size":"803724943"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460","registry.k8s.io/kube-apiserver@sha256:9b2e9bae4dc94991e51c601ba6a00369b45064243ba7822143b286edb9d41f9e"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.3"],"size":"89050097"},{"id":"5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954","registry.k8s.io/kube-controller-manager@sha25
6:90ceecee64b3dac0e619928b9b21522bde1a120bb039971110ab68d830c1f1a2"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.3"],"size":"76004183"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-345985 image ls --format json --alsologtostderr:
I1217 19:32:27.520852   14700 out.go:360] Setting OutFile to fd 1 ...
I1217 19:32:27.520964   14700 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:32:27.520975   14700 out.go:374] Setting ErrFile to fd 2...
I1217 19:32:27.520978   14700 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:32:27.521191   14700 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-3611/.minikube/bin
I1217 19:32:27.521746   14700 config.go:182] Loaded profile config "functional-345985": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 19:32:27.521853   14700 config.go:182] Loaded profile config "functional-345985": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 19:32:27.523551   14700 ssh_runner.go:195] Run: systemctl --version
I1217 19:32:27.525585   14700 main.go:143] libmachine: domain functional-345985 has defined MAC address 52:54:00:9f:b6:2b in network mk-functional-345985
I1217 19:32:27.525961   14700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9f:b6:2b", ip: ""} in network mk-functional-345985: {Iface:virbr1 ExpiryTime:2025-12-17 20:29:12 +0000 UTC Type:0 Mac:52:54:00:9f:b6:2b Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:functional-345985 Clientid:01:52:54:00:9f:b6:2b}
I1217 19:32:27.525997   14700 main.go:143] libmachine: domain functional-345985 has defined IP address 192.168.39.4 and MAC address 52:54:00:9f:b6:2b in network mk-functional-345985
I1217 19:32:27.526145   14700 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/functional-345985/id_rsa Username:docker}
I1217 19:32:27.649243   14700 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.37s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-345985 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:490ff7b484d67db4a77e8d4bba9f12da68f6a3cae8da3b977522b60c8b5092c9
- registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.3
size: "53853013"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: 5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954
- registry.k8s.io/kube-controller-manager@sha256:90ceecee64b3dac0e619928b9b21522bde1a120bb039971110ab68d830c1f1a2
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.3
size: "76004183"
- id: 36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691
repoDigests:
- registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6
- registry.k8s.io/kube-proxy@sha256:aee44d152c9eaa4f3e10584e61ee501a094880168db257af1201c806982a0945
repoTags:
- registry.k8s.io/kube-proxy:v1.34.3
size: "73145241"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-345985
size: "4945146"
- id: 20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036
- public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "803724943"
- id: aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460
- registry.k8s.io/kube-apiserver@sha256:9b2e9bae4dc94991e51c601ba6a00369b45064243ba7822143b286edb9d41f9e
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.3
size: "89050097"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 10da9618eb60c11b3f7dcde5c8402d759f24637bb02a80fa9a9a782251e9402c
repoDigests:
- localhost/minikube-local-cache-test@sha256:a80e9133dbe7a22defeab63a696c9cb3118a42bfe944e72a3a347bfdc4d57105
repoTags:
- localhost/minikube-local-cache-test:functional-345985
size: "3330"
- id: a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
- public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55156597"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-345985 image ls --format yaml --alsologtostderr:
I1217 19:32:27.225654   14674 out.go:360] Setting OutFile to fd 1 ...
I1217 19:32:27.225755   14674 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:32:27.225767   14674 out.go:374] Setting ErrFile to fd 2...
I1217 19:32:27.225773   14674 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:32:27.225968   14674 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-3611/.minikube/bin
I1217 19:32:27.226574   14674 config.go:182] Loaded profile config "functional-345985": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 19:32:27.226698   14674 config.go:182] Loaded profile config "functional-345985": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 19:32:27.229187   14674 ssh_runner.go:195] Run: systemctl --version
I1217 19:32:27.231497   14674 main.go:143] libmachine: domain functional-345985 has defined MAC address 52:54:00:9f:b6:2b in network mk-functional-345985
I1217 19:32:27.231922   14674 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9f:b6:2b", ip: ""} in network mk-functional-345985: {Iface:virbr1 ExpiryTime:2025-12-17 20:29:12 +0000 UTC Type:0 Mac:52:54:00:9f:b6:2b Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:functional-345985 Clientid:01:52:54:00:9f:b6:2b}
I1217 19:32:27.231948   14674 main.go:143] libmachine: domain functional-345985 has defined IP address 192.168.39.4 and MAC address 52:54:00:9f:b6:2b in network mk-functional-345985
I1217 19:32:27.232085   14674 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/functional-345985/id_rsa Username:docker}
I1217 19:32:27.368837   14674 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (6.01s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-345985 ssh pgrep buildkitd: exit status 1 (222.326684ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 image build -t localhost/my-image:functional-345985 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-345985 image build -t localhost/my-image:functional-345985 testdata/build --alsologtostderr: (5.582394758s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-345985 image build -t localhost/my-image:functional-345985 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 0bb3972f51c
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-345985
--> ab4d23822e5
Successfully tagged localhost/my-image:functional-345985
ab4d23822e598cd06e75046f3aa9f4f1cfb924f2c91563f338eda2c61625fd1a
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-345985 image build -t localhost/my-image:functional-345985 testdata/build --alsologtostderr:
I1217 19:32:27.510112   14694 out.go:360] Setting OutFile to fd 1 ...
I1217 19:32:27.510265   14694 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:32:27.510276   14694 out.go:374] Setting ErrFile to fd 2...
I1217 19:32:27.510282   14694 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:32:27.510518   14694 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-3611/.minikube/bin
I1217 19:32:27.511082   14694 config.go:182] Loaded profile config "functional-345985": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 19:32:27.511749   14694 config.go:182] Loaded profile config "functional-345985": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 19:32:27.514466   14694 ssh_runner.go:195] Run: systemctl --version
I1217 19:32:27.517208   14694 main.go:143] libmachine: domain functional-345985 has defined MAC address 52:54:00:9f:b6:2b in network mk-functional-345985
I1217 19:32:27.517699   14694 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9f:b6:2b", ip: ""} in network mk-functional-345985: {Iface:virbr1 ExpiryTime:2025-12-17 20:29:12 +0000 UTC Type:0 Mac:52:54:00:9f:b6:2b Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:functional-345985 Clientid:01:52:54:00:9f:b6:2b}
I1217 19:32:27.517738   14694 main.go:143] libmachine: domain functional-345985 has defined IP address 192.168.39.4 and MAC address 52:54:00:9f:b6:2b in network mk-functional-345985
I1217 19:32:27.517920   14694 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/functional-345985/id_rsa Username:docker}
I1217 19:32:27.629668   14694 build_images.go:162] Building image from path: /tmp/build.1534672657.tar
I1217 19:32:27.629734   14694 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1217 19:32:27.662685   14694 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1534672657.tar
I1217 19:32:27.692500   14694 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1534672657.tar: stat -c "%s %y" /var/lib/minikube/build/build.1534672657.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1534672657.tar': No such file or directory
I1217 19:32:27.692557   14694 ssh_runner.go:362] scp /tmp/build.1534672657.tar --> /var/lib/minikube/build/build.1534672657.tar (3072 bytes)
I1217 19:32:27.757555   14694 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1534672657
I1217 19:32:27.781731   14694 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1534672657 -xf /var/lib/minikube/build/build.1534672657.tar
I1217 19:32:27.803791   14694 crio.go:315] Building image: /var/lib/minikube/build/build.1534672657
I1217 19:32:27.803889   14694 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-345985 /var/lib/minikube/build/build.1534672657 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1217 19:32:32.996024   14694 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-345985 /var/lib/minikube/build/build.1534672657 --cgroup-manager=cgroupfs: (5.192105764s)
I1217 19:32:32.996082   14694 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1534672657
I1217 19:32:33.010869   14694 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1534672657.tar
I1217 19:32:33.023873   14694 build_images.go:218] Built localhost/my-image:functional-345985 from /tmp/build.1534672657.tar
I1217 19:32:33.023913   14694 build_images.go:134] succeeded building to: functional-345985
I1217 19:32:33.023920   14694 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.01s)
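
Note: the trace above shows the mechanism behind image build: the local testdata/build context is tarred, copied into /var/lib/minikube/build inside the VM, and built there with podman under the cgroupfs cgroup manager. A rough manual equivalent, with illustrative paths (minikube itself generates a random build directory name):

tar -C testdata/build -cf /tmp/build-ctx.tar .   # package the build context, as minikube does
out/minikube-linux-amd64 -p functional-345985 cp /tmp/build-ctx.tar /tmp/build-ctx.tar
out/minikube-linux-amd64 -p functional-345985 ssh "sudo mkdir -p /tmp/build-ctx && sudo tar -C /tmp/build-ctx -xf /tmp/build-ctx.tar && sudo podman build -t localhost/my-image:functional-345985 /tmp/build-ctx --cgroup-manager=cgroupfs"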

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.97s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.952883044s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-345985
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.97s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.07s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.07s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 image load --daemon kicbase/echo-server:functional-345985 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-345985 image load --daemon kicbase/echo-server:functional-345985 --alsologtostderr: (1.000183739s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.84s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 image load --daemon kicbase/echo-server:functional-345985 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.84s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-345985
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 image load --daemon kicbase/echo-server:functional-345985 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.13s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.79s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 image save kicbase/echo-server:functional-345985 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.79s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.30s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 service list -o json
functional_test.go:1504: Took "287.118051ms" to run "out/minikube-linux-amd64 -p functional-345985 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.4:30347
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.29s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.31s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.4:30347
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.30s)
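
The ServiceCmd subtests query the same NodePort service in different output formats. The equivalent manual checks, assuming the hello-node deployment is already exposed in the default namespace:
	$ minikube -p functional-345985 service list
	$ minikube -p functional-345985 service list -o json
	$ minikube -p functional-345985 service --namespace=default --https --url hello-node
	$ minikube -p functional-345985 service hello-node --url --format={{.IP}}
	$ minikube -p functional-345985 service hello-node --url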

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-linux-amd64 -p functional-345985 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (2.342157928s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.56s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (6.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-345985
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 image save --daemon kicbase/echo-server:functional-345985 --alsologtostderr
functional_test.go:439: (dbg) Done: out/minikube-linux-amd64 -p functional-345985 image save --daemon kicbase/echo-server:functional-345985 --alsologtostderr: (6.88183255s)
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-345985
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (6.92s)
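
ImageSaveToFile, ImageLoadFromFile and ImageSaveDaemon exercise the reverse direction: exporting an image from the cluster runtime to a tarball or back into the host's Docker daemon. A rough manual equivalent (the tarball path is illustrative, not the harness path):
	$ minikube -p functional-345985 image save kicbase/echo-server:functional-345985 /tmp/echo-server.tar
	$ minikube -p functional-345985 image load /tmp/echo-server.tar
	$ minikube -p functional-345985 image save --daemon kicbase/echo-server:functional-345985
	$ docker image inspect localhost/kicbase/echo-server:functional-345985   # the test above checks for the localhost/ prefix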

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "271.834055ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "70.619956ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "300.178635ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "63.783234ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)
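
The ProfileCmd subtests time the table and JSON renderings of profile list; the -l/--light variants skip the per-cluster status probe, which is why they return roughly 4-5x faster in the timings above. To reproduce the comparison (jq is assumed only for pretty-printing):
	$ minikube profile list
	$ minikube profile list -l
	$ minikube profile list -o json | jq .
	$ minikube profile list -o json --light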

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (11.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-345985 /tmp/TestFunctionalparallelMountCmdany-port4074818011/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765999931582578981" to /tmp/TestFunctionalparallelMountCmdany-port4074818011/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765999931582578981" to /tmp/TestFunctionalparallelMountCmdany-port4074818011/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765999931582578981" to /tmp/TestFunctionalparallelMountCmdany-port4074818011/001/test-1765999931582578981
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-345985 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (166.155659ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1217 19:32:11.749094    7531 retry.go:31] will retry after 596.501274ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 17 19:32 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 17 19:32 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 17 19:32 test-1765999931582578981
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 ssh cat /mount-9p/test-1765999931582578981
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-345985 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [993625c5-c373-4af8-8444-8f36fdeb9da6] Pending
helpers_test.go:353: "busybox-mount" [993625c5-c373-4af8-8444-8f36fdeb9da6] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [993625c5-c373-4af8-8444-8f36fdeb9da6] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [993625c5-c373-4af8-8444-8f36fdeb9da6] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 9.00367111s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-345985 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-345985 /tmp/TestFunctionalparallelMountCmdany-port4074818011/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (11.06s)
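
The any-port mount test starts a 9p mount in the background, waits for it to appear inside the guest, then runs a pod that reads and writes through it. Reproduced by hand it is roughly the following (the host directory is illustrative; the first findmnt may need a retry, as the retry line above shows):
	$ minikube mount -p functional-345985 /tmp/hostdir:/mount-9p &
	$ minikube -p functional-345985 ssh "findmnt -T /mount-9p | grep 9p"
	$ minikube -p functional-345985 ssh -- ls -la /mount-9p
	$ minikube -p functional-345985 ssh "sudo umount -f /mount-9p"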

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-345985 /tmp/TestFunctionalparallelMountCmdspecific-port3788846189/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-345985 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (186.8114ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1217 19:32:22.833398    7531 retry.go:31] will retry after 634.361805ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-345985 /tmp/TestFunctionalparallelMountCmdspecific-port3788846189/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-345985 ssh "sudo umount -f /mount-9p": exit status 1 (216.21928ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-345985 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-345985 /tmp/TestFunctionalparallelMountCmdspecific-port3788846189/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
I1217 19:32:24.242977    7531 detect.go:223] nested VM detected
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.62s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-345985 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2990019916/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-345985 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2990019916/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-345985 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2990019916/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-345985 ssh "findmnt -T" /mount1: exit status 1 (227.653294ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1217 19:32:24.497851    7531 retry.go:31] will retry after 256.662854ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-345985 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-345985 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-345985 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2990019916/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-345985 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2990019916/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-345985 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2990019916/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.12s)
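
VerifyCleanup points one host directory at three guest mount points and then relies on a single kill switch to terminate every background mount process for the profile; the "unable to find parent, assuming dead" lines are consistent with those processes already being gone when the test's own stop helper runs. The cleanup call on its own:
	$ minikube mount -p functional-345985 --kill=true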

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-345985
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-345985
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-345985
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22186-3611/.minikube/files/etc/test/nested/copy/7531/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (81.52s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-841762 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
E1217 19:33:16.198469    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:33:43.907015    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-841762 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (1m21.523686362s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (81.52s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart (30.44s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart
I1217 19:33:55.818149    7531 config.go:182] Loaded profile config "functional-841762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-841762 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-841762 --alsologtostderr -v=8: (30.44294695s)
functional_test.go:678: soft start took 30.44327958s for "functional-841762" cluster.
I1217 19:34:26.261405    7531 config.go:182] Loaded profile config "functional-841762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart (30.44s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext (0.04s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-841762 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods (0.11s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote (3.28s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-841762 cache add registry.k8s.io/pause:3.1: (1.065304946s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-841762 cache add registry.k8s.io/pause:3.3: (1.147709096s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-841762 cache add registry.k8s.io/pause:latest: (1.069665765s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote (3.28s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local (2.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-841762 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialCacheC1408606167/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 cache add minikube-local-cache-test:functional-841762
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-841762 cache add minikube-local-cache-test:functional-841762: (1.912782444s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 cache delete minikube-local-cache-test:functional-841762
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-841762
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local (2.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload (1.53s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-841762 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (175.610246ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload (1.53s)
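
The CacheCmd group adds remote and locally built images to minikube's on-disk cache, then shows that cache reload restores an image deleted from inside the node. A condensed version of the same sequence:
	$ minikube -p functional-841762 cache add registry.k8s.io/pause:latest
	$ minikube cache list
	$ minikube -p functional-841762 ssh sudo crictl rmi registry.k8s.io/pause:latest
	$ minikube -p functional-841762 cache reload
	$ minikube -p functional-841762 ssh sudo crictl inspecti registry.k8s.io/pause:latest
	$ minikube cache delete registry.k8s.io/pause:latest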

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 kubectl -- --context functional-841762 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-841762 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig (34.51s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-841762 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-841762 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.505988996s)
functional_test.go:776: restart took 34.506108392s for "functional-841762" cluster.
I1217 19:35:08.573435    7531 config.go:182] Loaded profile config "functional-841762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig (34.51s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-841762 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth (0.07s)
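
ComponentHealth inspects the control-plane pods left behind by the ExtraConfig restart and checks that each is Running and Ready. The same check by hand; the jsonpath expression is an illustrative alternative to the test's own JSON parsing:
	$ kubectl --context functional-841762 get po -l tier=control-plane -n kube-system \
	    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'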

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd (1.29s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-841762 logs: (1.292039316s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd (1.29s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd (1.28s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialLogsFi3689586016/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-841762 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialLogsFi3689586016/001/logs.txt: (1.276209701s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd (1.28s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService (4.66s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-841762 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-841762
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-841762: exit status 115 (232.880142ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.238:32108 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-841762 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-841762 delete -f testdata/invalidsvc.yaml: (1.216171354s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService (4.66s)
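
InvalidService applies a NodePort service whose selector matches no running pod, and minikube service refuses with SVC_UNREACHABLE (exit status 115) even though a URL could be computed. The shape of the scenario without the repo's testdata, using kubectl create service in place of invalidsvc.yaml (service name and port are illustrative):
	$ kubectl --context functional-841762 create service nodeport invalid-svc --tcp=80:80
	$ minikube -p functional-841762 service invalid-svc     # expected to fail: no running pod backs the service
	$ kubectl --context functional-841762 delete service invalid-svc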

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd (0.41s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-841762 config get cpus: exit status 14 (58.921352ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-841762 config get cpus: exit status 14 (68.081694ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd (0.41s)
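
ConfigCmd verifies that config get on an unset key exits with status 14, while set/unset round-trip cleanly:
	$ minikube -p functional-841762 config unset cpus
	$ minikube -p functional-841762 config get cpus    # exit status 14: key not found
	$ minikube -p functional-841762 config set cpus 2
	$ minikube -p functional-841762 config get cpus    # prints 2
	$ minikube -p functional-841762 config unset cpus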

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd (21.53s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-841762 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-841762 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 17054: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd (21.53s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun (0.21s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-841762 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-841762 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: exit status 23 (104.078718ms)

                                                
                                                
-- stdout --
	* [functional-841762] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22186
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22186-3611/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-3611/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 19:35:48.170746   16978 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:35:48.170833   16978 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:35:48.170843   16978 out.go:374] Setting ErrFile to fd 2...
	I1217 19:35:48.170849   16978 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:35:48.171049   16978 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-3611/.minikube/bin
	I1217 19:35:48.171524   16978 out.go:368] Setting JSON to false
	I1217 19:35:48.172358   16978 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":1087,"bootTime":1765999061,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 19:35:48.172418   16978 start.go:143] virtualization: kvm guest
	I1217 19:35:48.174334   16978 out.go:179] * [functional-841762] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 19:35:48.175707   16978 out.go:179]   - MINIKUBE_LOCATION=22186
	I1217 19:35:48.175715   16978 notify.go:221] Checking for updates...
	I1217 19:35:48.177599   16978 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 19:35:48.178662   16978 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-3611/kubeconfig
	I1217 19:35:48.179752   16978 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-3611/.minikube
	I1217 19:35:48.180708   16978 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 19:35:48.181741   16978 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 19:35:48.183269   16978 config.go:182] Loaded profile config "functional-841762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 19:35:48.183754   16978 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 19:35:48.213870   16978 out.go:179] * Using the kvm2 driver based on existing profile
	I1217 19:35:48.214842   16978 start.go:309] selected driver: kvm2
	I1217 19:35:48.214853   16978 start.go:927] validating driver "kvm2" against &{Name:functional-841762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-rc.1 ClusterName:functional-841762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:262
80h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 19:35:48.214938   16978 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 19:35:48.216701   16978 out.go:203] 
	W1217 19:35:48.217613   16978 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1217 19:35:48.218589   16978 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-841762 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun (0.21s)
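
DryRun re-runs start against the existing profile without creating anything. The 250MB request is deliberately below minikube's 1800MB usable minimum, so the first invocation fails validation with exit status 23, while the second (no memory override) passes:
	$ minikube start -p functional-841762 --dry-run --memory 250MB \
	    --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.35.0-rc.1    # exit 23: RSRC_INSUFFICIENT_REQ_MEMORY
	$ minikube start -p functional-841762 --dry-run \
	    --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.35.0-rc.1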

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-841762 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-841762 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: exit status 23 (105.881426ms)

                                                
                                                
-- stdout --
	* [functional-841762] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22186
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22186-3611/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-3611/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 19:35:43.989511   16829 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:35:43.989769   16829 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:35:43.989779   16829 out.go:374] Setting ErrFile to fd 2...
	I1217 19:35:43.989785   16829 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:35:43.990073   16829 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-3611/.minikube/bin
	I1217 19:35:43.990486   16829 out.go:368] Setting JSON to false
	I1217 19:35:43.991267   16829 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":1083,"bootTime":1765999061,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 19:35:43.991316   16829 start.go:143] virtualization: kvm guest
	I1217 19:35:43.993194   16829 out.go:179] * [functional-841762] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1217 19:35:43.994237   16829 notify.go:221] Checking for updates...
	I1217 19:35:43.994265   16829 out.go:179]   - MINIKUBE_LOCATION=22186
	I1217 19:35:43.995548   16829 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 19:35:43.997009   16829 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-3611/kubeconfig
	I1217 19:35:43.997985   16829 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-3611/.minikube
	I1217 19:35:43.998956   16829 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 19:35:43.999880   16829 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 19:35:44.001148   16829 config.go:182] Loaded profile config "functional-841762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 19:35:44.001644   16829 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 19:35:44.031309   16829 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1217 19:35:44.032259   16829 start.go:309] selected driver: kvm2
	I1217 19:35:44.032271   16829 start.go:927] validating driver "kvm2" against &{Name:functional-841762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-rc.1 ClusterName:functional-841762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:262
80h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 19:35:44.032359   16829 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 19:35:44.034155   16829 out.go:203] 
	W1217 19:35:44.035180   16829 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1217 19:35:44.036147   16829 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage (0.11s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd (0.66s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd (0.66s)
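
StatusCmd exercises the default, Go-template and JSON renderings of status. (The template label "kublet" in the command above is just literal output text; only the {{.Kubelet}} field name matters.) Equivalent calls:
	$ minikube -p functional-841762 status
	$ minikube -p functional-841762 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
	$ minikube -p functional-841762 status -o json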

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect (25.47s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-841762 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-841762 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-9f67c86d4-6297j" [e3a3b2f1-09e3-41dc-8931-e6b29041d767] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-9f67c86d4-6297j" [e3a3b2f1-09e3-41dc-8931-e6b29041d767] Running
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 25.008854018s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.238:31087
functional_test.go:1680: http://192.168.39.238:31087: success! body:
Request served by hello-node-connect-9f67c86d4-6297j

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.238:31087
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect (25.47s)
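
The connectivity check above boils down to two steps: resolve the NodePort URL with `minikube service ... --url`, then issue a plain GET and read the echo-server body. A hedged sketch of those two steps, reusing the binary path and profile name from this run:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Ask minikube for the NodePort URL of the service created with
	// `kubectl expose deployment hello-node-connect --type=NodePort --port=8080`.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-841762",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out))

	// The echo-server replies with a plain-text description of the request,
	// so a single GET is enough to verify end-to-end connectivity.
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("GET %s -> %d\n%s", url, resp.StatusCode, body)
}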

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd (0.15s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim (39.65s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [e5017d8b-21e4-40fd-808f-cd96a777bed9] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.006606142s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-841762 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-841762 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-841762 get pvc myclaim -o=json
I1217 19:35:23.784167    7531 retry.go:31] will retry after 2.040860374s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:27b6c6b4-d617-4c65-83dd-0b741892b097 ResourceVersion:712 Generation:0 CreationTimestamp:2025-12-17 19:35:23 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc00195e5c0 VolumeMode:0xc00195e5d0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-841762 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-841762 apply -f testdata/storage-provisioner/pod.yaml
I1217 19:35:26.056584    7531 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [46a37859-9e6d-4fb6-aabf-984a101299bc] Pending
helpers_test.go:353: "sp-pod" [46a37859-9e6d-4fb6-aabf-984a101299bc] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [46a37859-9e6d-4fb6-aabf-984a101299bc] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 23.004536416s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-841762 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-841762 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-841762 delete -f testdata/storage-provisioner/pod.yaml: (1.558220326s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-841762 apply -f testdata/storage-provisioner/pod.yaml
I1217 19:35:50.945639    7531 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [8ce4413e-b87a-4290-917d-fa9662ae03d0] Pending
helpers_test.go:353: "sp-pod" [8ce4413e-b87a-4290-917d-fa9662ae03d0] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.009715336s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-841762 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim (39.65s)
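
The retry logged at 19:35:23 happens because the claim is still Pending while the minikube-hostpath provisioner creates the volume. Below is a small sketch of the same wait, polling the claim's phase with kubectl until it reports Bound; the context and claim name are taken from this run, and waitForPVCBound is an illustrative helper, not part of the test suite.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCBound polls `kubectl get pvc` until the claim reports phase Bound,
// mirroring the retry the test performs while the storage provisioner catches up.
func waitForPVCBound(context, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", context,
			"get", "pvc", name, "-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s not Bound within %s", name, timeout)
}

func main() {
	if err := waitForPVCBound("functional-841762", "myclaim", 2*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("myclaim is Bound")
}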

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd (0.35s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd (0.35s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd (1.09s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 ssh -n functional-841762 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 cp functional-841762:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelCpCm2266172954/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 ssh -n functional-841762 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 ssh -n functional-841762 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd (1.09s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL (31.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-841762 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-7d7b65bc95-x72t5" [2b809106-6712-4810-aa21-3c7849fb4c26] Pending
helpers_test.go:353: "mysql-7d7b65bc95-x72t5" [2b809106-6712-4810-aa21-3c7849fb4c26] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-7d7b65bc95-x72t5" [2b809106-6712-4810-aa21-3c7849fb4c26] Running
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL: app=mysql healthy within 25.009369582s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-841762 exec mysql-7d7b65bc95-x72t5 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-841762 exec mysql-7d7b65bc95-x72t5 -- mysql -ppassword -e "show databases;": exit status 1 (261.884107ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 19:35:41.756509    7531 retry.go:31] will retry after 864.693465ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-841762 exec mysql-7d7b65bc95-x72t5 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-841762 exec mysql-7d7b65bc95-x72t5 -- mysql -ppassword -e "show databases;": exit status 1 (161.633865ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 19:35:42.783490    7531 retry.go:31] will retry after 1.036879447s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-841762 exec mysql-7d7b65bc95-x72t5 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-841762 exec mysql-7d7b65bc95-x72t5 -- mysql -ppassword -e "show databases;": exit status 1 (148.066709ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 19:35:43.969079    7531 retry.go:31] will retry after 3.28938835s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-841762 exec mysql-7d7b65bc95-x72t5 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL (31.11s)
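
The ERROR 1045 / ERROR 2002 exits above are expected: the pod goes Ready before mysqld has finished initializing, so the test simply retries the query. A sketch of the same retry-with-backoff around `kubectl exec`, using the pod name from this particular run (it changes on every deployment):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Right after the pod is Ready, mysqld may still be initializing, which is
	// why the log shows access-denied and socket errors before the query succeeds.
	// Retrying with a short, growing backoff is enough in practice.
	pod := "mysql-7d7b65bc95-x72t5" // pod name from this run only
	var lastErr error
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", "--context", "functional-841762",
			"exec", pod, "--", "mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		lastErr = err
		time.Sleep(time.Duration(attempt) * time.Second)
	}
	panic(lastErr)
}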

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/7531/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 ssh "sudo cat /etc/test/nested/copy/7531/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync (0.18s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync (1.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/7531.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 ssh "sudo cat /etc/ssl/certs/7531.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/7531.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 ssh "sudo cat /usr/share/ca-certificates/7531.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/75312.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 ssh "sudo cat /etc/ssl/certs/75312.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/75312.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 ssh "sudo cat /usr/share/ca-certificates/75312.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync (1.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-841762 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels (0.06s)
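
The go-template in the NodeLabels check iterates the label keys of the first node. An equivalent, hedged sketch that fetches the node list as JSON and prints the same keys; the nodeList struct below only models the fields needed here and is not taken from any library.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// nodeList captures just enough of the `kubectl get nodes -o json` payload to
// read the labels the go-template in the test iterates over.
type nodeList struct {
	Items []struct {
		Metadata struct {
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-841762",
		"get", "nodes", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var nodes nodeList
	if err := json.Unmarshal(out, &nodes); err != nil {
		panic(err)
	}
	if len(nodes.Items) == 0 {
		panic("no nodes found")
	}
	for k := range nodes.Items[0].Metadata.Labels {
		fmt.Println(k)
	}
}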

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled (0.34s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-841762 ssh "sudo systemctl is-active docker": exit status 1 (166.881812ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-841762 ssh "sudo systemctl is-active containerd": exit status 1 (170.900369ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled (0.34s)
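
The non-zero exits above are not failures: with crio selected, `systemctl is-active docker` and `systemctl is-active containerd` print "inactive" and exit with status 3, which `minikube ssh` passes through. A short sketch that reads the printed state and ignores the exit code, using the binary path and profile from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Only the printed unit state matters; `systemctl is-active` exits 3 for
	// an inactive unit, so the error from Output() is deliberately ignored.
	for _, unit := range []string{"docker", "containerd"} {
		out, _ := exec.Command("out/minikube-linux-amd64", "-p", "functional-841762",
			"ssh", "sudo systemctl is-active "+unit).Output()
		state := strings.TrimSpace(string(out))
		fmt.Printf("%s: %s\n", unit, state)
	}
}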

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License (0.44s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License (0.44s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-841762 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-rc.1
registry.k8s.io/kube-proxy:v1.35.0-rc.1
registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
registry.k8s.io/kube-apiserver:v1.35.0-rc.1
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
localhost/minikube-local-cache-test:functional-841762
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-841762 image ls --format short --alsologtostderr:
I1217 19:35:53.655618   17265 out.go:360] Setting OutFile to fd 1 ...
I1217 19:35:53.655753   17265 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:35:53.655763   17265 out.go:374] Setting ErrFile to fd 2...
I1217 19:35:53.655770   17265 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:35:53.656010   17265 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-3611/.minikube/bin
I1217 19:35:53.656638   17265 config.go:182] Loaded profile config "functional-841762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 19:35:53.656760   17265 config.go:182] Loaded profile config "functional-841762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 19:35:53.658829   17265 ssh_runner.go:195] Run: systemctl --version
I1217 19:35:53.661177   17265 main.go:143] libmachine: domain functional-841762 has defined MAC address 52:54:00:12:95:31 in network mk-functional-841762
I1217 19:35:53.661785   17265 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:12:95:31", ip: ""} in network mk-functional-841762: {Iface:virbr1 ExpiryTime:2025-12-17 20:32:49 +0000 UTC Type:0 Mac:52:54:00:12:95:31 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:functional-841762 Clientid:01:52:54:00:12:95:31}
I1217 19:35:53.661822   17265 main.go:143] libmachine: domain functional-841762 has defined IP address 192.168.39.238 and MAC address 52:54:00:12:95:31 in network mk-functional-841762
I1217 19:35:53.662018   17265 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/functional-841762/id_rsa Username:docker}
I1217 19:35:53.758149   17265 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-841762 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ registry.k8s.io/kube-proxy              │ v1.35.0-rc.1       │ af0321f3a4f38 │ 72MB   │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ public.ecr.aws/docker/library/mysql     │ 8.4                │ 20d0be4ee4524 │ 804MB  │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-rc.1       │ 5032a56602e1b │ 76.9MB │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-rc.1       │ 73f80cdc073da │ 52.8MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ localhost/minikube-local-cache-test     │ functional-841762  │ 10da9618eb60c │ 3.33kB │
│ public.ecr.aws/nginx/nginx              │ alpine             │ a236f84b9d5d2 │ 55.2MB │
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ aa5e3ebc0dfed │ 79.2MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/etcd                    │ 3.6.6-0            │ 0a108f7189562 │ 63.6MB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-rc.1       │ 58865405a13bc │ 90.8MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-841762 image ls --format table --alsologtostderr:
I1217 19:35:54.363155   17354 out.go:360] Setting OutFile to fd 1 ...
I1217 19:35:54.363480   17354 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:35:54.363494   17354 out.go:374] Setting ErrFile to fd 2...
I1217 19:35:54.363501   17354 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:35:54.363835   17354 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-3611/.minikube/bin
I1217 19:35:54.364503   17354 config.go:182] Loaded profile config "functional-841762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 19:35:54.364617   17354 config.go:182] Loaded profile config "functional-841762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 19:35:54.366977   17354 ssh_runner.go:195] Run: systemctl --version
I1217 19:35:54.369505   17354 main.go:143] libmachine: domain functional-841762 has defined MAC address 52:54:00:12:95:31 in network mk-functional-841762
I1217 19:35:54.369987   17354 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:12:95:31", ip: ""} in network mk-functional-841762: {Iface:virbr1 ExpiryTime:2025-12-17 20:32:49 +0000 UTC Type:0 Mac:52:54:00:12:95:31 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:functional-841762 Clientid:01:52:54:00:12:95:31}
I1217 19:35:54.370026   17354 main.go:143] libmachine: domain functional-841762 has defined IP address 192.168.39.238 and MAC address 52:54:00:12:95:31 in network mk-functional-841762
I1217 19:35:54.370183   17354 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/functional-841762/id_rsa Username:docker}
I1217 19:35:54.482352   17354 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-841762 image ls --format json --alsologtostderr:
[{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7","registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c
91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"79193994"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c","repoDigests":["public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff","public.ecr.aws/nginx/ng
inx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55156597"},{"id":"af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a","repoDigests":["registry.k8s.io/kube-proxy@sha256:0efaa6b2a17dbaaac351bb0f55c1a495d297d87ac86b16965ec52e835c2b48d9","registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-rc.1"],"size":"71986585"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f9
82ce05b1ddb9b282b780fc86"],"repoTags":["docker.io/kicbase/echo-server:latest"],"size":"4943877"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2","repoDigests":["registry.k8s.io/etcd@sha256:5279f56db4f32772bb41e47ca44c553f5c87a08fdf339d74c23a4cdc3c388d6a","registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"63582405"},{"id":"58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce","repoDigests":["registry.k8s.io/kube-apiserver@sha256:4527daf97bed5f1caff2267f9b84a6c626b82615d9ff7f933619321aebde536f","registry.k8s.io/kub
e-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-rc.1"],"size":"90844140"},{"id":"5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98","registry.k8s.io/kube-controller-manager@sha256:94b94fef358192d13794f5acd21909a3eb0b3e960ed4286ef37a437e7f9272cd"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"],"size":"76893010"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"10da9618eb60c11b3f7dcde5c8402d759f24637bb02a80fa9a9a782251e9402c","repoDigests":["localhost/minikube-local-cache-test@sha256:a80e9133dbe7a22defeab63a696c9cb3118a42bfe944e72a3a347bfdc4d57105"],"repoTags":["l
ocalhost/minikube-local-cache-test:functional-841762"],"size":"3330"},{"id":"20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036","public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"803724943"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:1e2bf4dfee764cc2eb3300c543b3ce1b00ca3ffc46b93f2b7ef326fbc2385636","registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3"],"repoTags":["registry.k8s
.io/kube-scheduler:v1.35.0-rc.1"],"size":"52763474"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-841762 image ls --format json --alsologtostderr:
I1217 19:35:54.135754   17308 out.go:360] Setting OutFile to fd 1 ...
I1217 19:35:54.136117   17308 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:35:54.136132   17308 out.go:374] Setting ErrFile to fd 2...
I1217 19:35:54.136140   17308 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:35:54.136500   17308 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-3611/.minikube/bin
I1217 19:35:54.137402   17308 config.go:182] Loaded profile config "functional-841762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 19:35:54.137573   17308 config.go:182] Loaded profile config "functional-841762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 19:35:54.140361   17308 ssh_runner.go:195] Run: systemctl --version
I1217 19:35:54.143141   17308 main.go:143] libmachine: domain functional-841762 has defined MAC address 52:54:00:12:95:31 in network mk-functional-841762
I1217 19:35:54.143648   17308 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:12:95:31", ip: ""} in network mk-functional-841762: {Iface:virbr1 ExpiryTime:2025-12-17 20:32:49 +0000 UTC Type:0 Mac:52:54:00:12:95:31 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:functional-841762 Clientid:01:52:54:00:12:95:31}
I1217 19:35:54.143682   17308 main.go:143] libmachine: domain functional-841762 has defined IP address 192.168.39.238 and MAC address 52:54:00:12:95:31 in network mk-functional-841762
I1217 19:35:54.143863   17308 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/functional-841762/id_rsa Username:docker}
I1217 19:35:54.229982   17308 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson (0.22s)
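
The JSON listing above is an array of image records with id, repoDigests, repoTags and size fields. A hedged sketch of decoding that output in Go; the imageInfo struct is inferred from the fields visible in the dump, not taken from minikube's source.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageInfo mirrors the fields visible in the `image ls --format json` output
// above: the image ID, its repo digests and tags, and the size as a string.
type imageInfo struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-841762",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []imageInfo
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		if len(img.RepoTags) > 0 {
			fmt.Printf("%-60s %s bytes\n", img.RepoTags[0], img.Size)
		}
	}
}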

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-841762 image ls --format yaml --alsologtostderr:
- id: 5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98
- registry.k8s.io/kube-controller-manager@sha256:94b94fef358192d13794f5acd21909a3eb0b3e960ed4286ef37a437e7f9272cd
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
size: "76893010"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 10da9618eb60c11b3f7dcde5c8402d759f24637bb02a80fa9a9a782251e9402c
repoDigests:
- localhost/minikube-local-cache-test@sha256:a80e9133dbe7a22defeab63a696c9cb3118a42bfe944e72a3a347bfdc4d57105
repoTags:
- localhost/minikube-local-cache-test:functional-841762
size: "3330"
- id: 58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:4527daf97bed5f1caff2267f9b84a6c626b82615d9ff7f933619321aebde536f
- registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-rc.1
size: "90844140"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:1e2bf4dfee764cc2eb3300c543b3ce1b00ca3ffc46b93f2b7ef326fbc2385636
- registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-rc.1
size: "52763474"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036
- public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "803724943"
- id: a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
- public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55156597"
- id: 0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2
repoDigests:
- registry.k8s.io/etcd@sha256:5279f56db4f32772bb41e47ca44c553f5c87a08fdf339d74c23a4cdc3c388d6a
- registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "63582405"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
repoTags:
- docker.io/kicbase/echo-server:latest
size: "4943877"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79193994"
- id: af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0efaa6b2a17dbaaac351bb0f55c1a495d297d87ac86b16965ec52e835c2b48d9
- registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-rc.1
size: "71986585"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-841762 image ls --format yaml --alsologtostderr:
I1217 19:35:53.879846   17287 out.go:360] Setting OutFile to fd 1 ...
I1217 19:35:53.879975   17287 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:35:53.879986   17287 out.go:374] Setting ErrFile to fd 2...
I1217 19:35:53.879992   17287 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:35:53.880202   17287 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-3611/.minikube/bin
I1217 19:35:53.880734   17287 config.go:182] Loaded profile config "functional-841762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 19:35:53.880818   17287 config.go:182] Loaded profile config "functional-841762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 19:35:53.883188   17287 ssh_runner.go:195] Run: systemctl --version
I1217 19:35:53.885976   17287 main.go:143] libmachine: domain functional-841762 has defined MAC address 52:54:00:12:95:31 in network mk-functional-841762
I1217 19:35:53.886408   17287 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:12:95:31", ip: ""} in network mk-functional-841762: {Iface:virbr1 ExpiryTime:2025-12-17 20:32:49 +0000 UTC Type:0 Mac:52:54:00:12:95:31 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:functional-841762 Clientid:01:52:54:00:12:95:31}
I1217 19:35:53.886435   17287 main.go:143] libmachine: domain functional-841762 has defined IP address 192.168.39.238 and MAC address 52:54:00:12:95:31 in network mk-functional-841762
I1217 19:35:53.886619   17287 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/functional-841762/id_rsa Username:docker}
I1217 19:35:53.979978   17287 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild (6.52s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-841762 ssh pgrep buildkitd: exit status 1 (165.588601ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 image build -t localhost/my-image:functional-841762 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-841762 image build -t localhost/my-image:functional-841762 testdata/build --alsologtostderr: (6.133354375s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-841762 image build -t localhost/my-image:functional-841762 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> b35836f46cc
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-841762
--> bdb4d774f1f
Successfully tagged localhost/my-image:functional-841762
bdb4d774f1fc9abf76549370512818d06b4371ec4a0675aa394484588f85afb6
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-841762 image build -t localhost/my-image:functional-841762 testdata/build --alsologtostderr:
I1217 19:35:54.790458   17379 out.go:360] Setting OutFile to fd 1 ...
I1217 19:35:54.790761   17379 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:35:54.790772   17379 out.go:374] Setting ErrFile to fd 2...
I1217 19:35:54.790778   17379 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:35:54.790960   17379 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-3611/.minikube/bin
I1217 19:35:54.791499   17379 config.go:182] Loaded profile config "functional-841762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 19:35:54.792583   17379 config.go:182] Loaded profile config "functional-841762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 19:35:54.795110   17379 ssh_runner.go:195] Run: systemctl --version
I1217 19:35:54.797044   17379 main.go:143] libmachine: domain functional-841762 has defined MAC address 52:54:00:12:95:31 in network mk-functional-841762
I1217 19:35:54.797473   17379 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:12:95:31", ip: ""} in network mk-functional-841762: {Iface:virbr1 ExpiryTime:2025-12-17 20:32:49 +0000 UTC Type:0 Mac:52:54:00:12:95:31 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:functional-841762 Clientid:01:52:54:00:12:95:31}
I1217 19:35:54.797506   17379 main.go:143] libmachine: domain functional-841762 has defined IP address 192.168.39.238 and MAC address 52:54:00:12:95:31 in network mk-functional-841762
I1217 19:35:54.797650   17379 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/functional-841762/id_rsa Username:docker}
I1217 19:35:54.890269   17379 build_images.go:162] Building image from path: /tmp/build.2196904540.tar
I1217 19:35:54.890331   17379 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1217 19:35:54.913955   17379 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2196904540.tar
I1217 19:35:54.924173   17379 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2196904540.tar: stat -c "%s %y" /var/lib/minikube/build/build.2196904540.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2196904540.tar': No such file or directory
I1217 19:35:54.924215   17379 ssh_runner.go:362] scp /tmp/build.2196904540.tar --> /var/lib/minikube/build/build.2196904540.tar (3072 bytes)
I1217 19:35:54.986508   17379 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2196904540
I1217 19:35:55.001445   17379 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2196904540 -xf /var/lib/minikube/build/build.2196904540.tar
I1217 19:35:55.014362   17379 crio.go:315] Building image: /var/lib/minikube/build/build.2196904540
I1217 19:35:55.014423   17379 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-841762 /var/lib/minikube/build/build.2196904540 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1217 19:36:00.813608   17379 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-841762 /var/lib/minikube/build/build.2196904540 --cgroup-manager=cgroupfs: (5.799136775s)
I1217 19:36:00.813698   17379 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2196904540
I1217 19:36:00.837289   17379 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2196904540.tar
I1217 19:36:00.858918   17379 build_images.go:218] Built localhost/my-image:functional-841762 from /tmp/build.2196904540.tar
I1217 19:36:00.858963   17379 build_images.go:134] succeeded building to: functional-841762
I1217 19:36:00.858971   17379 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 image ls
2025/12/17 19:36:09 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild (6.52s)
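
The build log above shows how "minikube image build" works against a crio runtime: the client tars the build context, copies it to /var/lib/minikube/build on the guest, and runs podman there with --cgroup-manager=cgroupfs. A minimal sketch of driving the same flow by hand, assuming a running profile named functional-841762 and a hypothetical local context directory ./build-ctx that contains a Dockerfile:

    # assumes profile functional-841762 is up and ./build-ctx holds a Dockerfile (hypothetical path)
    minikube -p functional-841762 image build -t localhost/my-image:functional-841762 ./build-ctx
    # the freshly built image should then appear in the runtime's image list
    minikube -p functional-841762 image ls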

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup (0.93s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-841762
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup (0.93s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components (0.73s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components (0.73s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create (0.33s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create (0.33s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon (1.4s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 image load --daemon kicbase/echo-server:functional-841762 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-841762 image load --daemon kicbase/echo-server:functional-841762 --alsologtostderr: (1.193422644s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon (1.40s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list (0.31s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "245.817518ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "59.683698ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list (0.31s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output (0.35s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "279.848762ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "68.144647ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output (0.35s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon (0.84s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 image load --daemon kicbase/echo-server:functional-841762 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon (0.84s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon (2.53s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-841762
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 image load --daemon kicbase/echo-server:functional-841762 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon (2.53s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile (0.76s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 image save kicbase/echo-server:functional-841762 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile (0.76s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp (15.29s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-841762 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-841762 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-5758569b79-ffdzl" [8e2a3455-ab5b-4f11-a36d-83b5ab4e80b7] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-5758569b79-ffdzl" [8e2a3455-ab5b-4f11-a36d-83b5ab4e80b7] Running
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 15.02455218s
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp (15.29s)
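
The DeployApp steps above reduce to a create/expose/wait sequence. A sketch of the same checks, assuming the functional-841762 kube-context exists; "kubectl wait" here stands in for the test helper's own pod polling:

    # create the deployment and expose it as a NodePort service, as the test does
    kubectl --context functional-841762 create deployment hello-node --image kicbase/echo-server
    kubectl --context functional-841762 expose deployment hello-node --type=NodePort --port=8080
    # wait for the pod behind app=hello-node to report Ready (the test allows up to 10m)
    kubectl --context functional-841762 wait --for=condition=ready pod -l app=hello-node --timeout=600s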

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port (8.82s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-841762 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2267986950/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1766000144041277828" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2267986950/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1766000144041277828" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2267986950/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1766000144041277828" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2267986950/001/test-1766000144041277828
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-841762 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (147.92576ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1217 19:35:44.189511    7531 retry.go:31] will retry after 332.232053ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 17 19:35 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 17 19:35 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 17 19:35 test-1766000144041277828
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 ssh cat /mount-9p/test-1766000144041277828
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-841762 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [d69d326d-bd08-4f49-b4d3-df4e6b4dc1a2] Pending
helpers_test.go:353: "busybox-mount" [d69d326d-bd08-4f49-b4d3-df4e6b4dc1a2] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [d69d326d-bd08-4f49-b4d3-df4e6b4dc1a2] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [d69d326d-bd08-4f49-b4d3-df4e6b4dc1a2] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.003975707s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-841762 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-841762 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2267986950/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port (8.82s)
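
The any-port case exercises a 9p host mount end to end: keep "minikube mount" running, confirm the mount from inside the guest, then check that host files are visible there. A rough reproduction under the same assumptions (profile functional-841762; /tmp/mnt-demo is a hypothetical host directory):

    mkdir -p /tmp/mnt-demo && echo "hello from host" > /tmp/mnt-demo/created-by-test
    minikube mount -p functional-841762 /tmp/mnt-demo:/mount-9p &      # keep the 9p server running in the background
    sleep 5                                                            # give the mount a moment to come up
    minikube -p functional-841762 ssh "findmnt -T /mount-9p | grep 9p" # the same check the test retries
    minikube -p functional-841762 ssh "cat /mount-9p/created-by-test"  # host file readable in the guest
    minikube mount -p functional-841762 --kill=true                    # tear the mount down, as VerifyCleanup does later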

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List (0.9s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 service list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List (0.90s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput (1.2s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-841762 service list -o json: (1.198138033s)
functional_test.go:1504: Took "1.198246114s" to run "out/minikube-linux-amd64 -p functional-841762 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput (1.20s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS (0.31s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.238:31400
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS (0.31s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 service hello-node --url --format={{.IP}}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format (0.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL (0.28s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.238:31400
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL (0.28s)
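
The HTTPS, Format, and URL variants above all resolve the same hello-node NodePort in different output formats. A short sketch of consuming the plain URL form, assuming the service from DeployApp is still present:

    # ask minikube for the reachable endpoint, then hit it
    URL=$(minikube -p functional-841762 service hello-node --url)
    curl -s "$URL"   # per the log above this resolves to http://192.168.39.238:31400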

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port (1.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-841762 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1311660400/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-841762 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (223.676499ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1217 19:35:53.080700    7531 retry.go:31] will retry after 391.497387ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-841762 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1311660400/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-841762 ssh "sudo umount -f /mount-9p": exit status 1 (196.715952ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-841762 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-841762 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1311660400/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port (1.39s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup (1.28s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-841762 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3535575192/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-841762 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3535575192/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-841762 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3535575192/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-841762 ssh "findmnt -T" /mount1: exit status 1 (190.863906ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1217 19:35:54.442814    7531 retry.go:31] will retry after 516.284035ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-841762 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-841762 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-841762 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3535575192/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-841762 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3535575192/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-841762 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3535575192/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup (1.28s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images (0.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-841762
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images (0.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-841762
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-841762
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (213.86s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1217 19:36:49.531476    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-345985/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:36:49.537871    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-345985/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:36:49.549294    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-345985/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:36:49.570729    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-345985/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:36:49.612108    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-345985/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:36:49.693513    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-345985/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:36:49.855088    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-345985/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:36:50.177003    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-345985/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:36:50.818670    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-345985/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:36:52.099969    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-345985/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:36:54.661567    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-345985/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:36:59.783469    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-345985/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:37:10.025711    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-345985/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:37:30.507794    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-345985/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:38:11.470240    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-345985/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:38:16.195746    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:39:33.391911    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-345985/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-759753 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (3m33.303472967s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (213.86s)

TestMultiControlPlane/serial/DeployApp (7.25s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-759753 kubectl -- rollout status deployment/busybox: (4.909093552s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 kubectl -- exec busybox-7b57f96db7-8nk5t -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 kubectl -- exec busybox-7b57f96db7-lh22z -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 kubectl -- exec busybox-7b57f96db7-r2xbp -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 kubectl -- exec busybox-7b57f96db7-8nk5t -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 kubectl -- exec busybox-7b57f96db7-lh22z -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 kubectl -- exec busybox-7b57f96db7-r2xbp -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 kubectl -- exec busybox-7b57f96db7-8nk5t -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 kubectl -- exec busybox-7b57f96db7-lh22z -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 kubectl -- exec busybox-7b57f96db7-r2xbp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.25s)

TestMultiControlPlane/serial/PingHostFromPods (1.3s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 kubectl -- exec busybox-7b57f96db7-8nk5t -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 kubectl -- exec busybox-7b57f96db7-8nk5t -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 kubectl -- exec busybox-7b57f96db7-lh22z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 kubectl -- exec busybox-7b57f96db7-lh22z -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 kubectl -- exec busybox-7b57f96db7-r2xbp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 kubectl -- exec busybox-7b57f96db7-r2xbp -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.30s)

TestMultiControlPlane/serial/AddWorkerNode (44.75s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 node add --alsologtostderr -v 5
E1217 19:40:16.485330    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-841762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:40:16.491731    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-841762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:40:16.503133    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-841762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:40:16.524496    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-841762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:40:16.565899    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-841762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:40:16.647770    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-841762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:40:16.809329    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-841762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:40:17.130796    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-841762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:40:17.772797    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-841762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:40:19.054572    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-841762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:40:21.616477    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-841762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:40:26.738549    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-841762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:40:36.980488    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-841762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-759753 node add --alsologtostderr -v 5: (44.067189269s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (44.75s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-759753 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.7s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.70s)

TestMultiControlPlane/serial/CopyFile (10.73s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 status --output json --alsologtostderr -v 5
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 cp testdata/cp-test.txt ha-759753:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 ssh -n ha-759753 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 cp ha-759753:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2506861656/001/cp-test_ha-759753.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 ssh -n ha-759753 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 cp ha-759753:/home/docker/cp-test.txt ha-759753-m02:/home/docker/cp-test_ha-759753_ha-759753-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 ssh -n ha-759753 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 ssh -n ha-759753-m02 "sudo cat /home/docker/cp-test_ha-759753_ha-759753-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 cp ha-759753:/home/docker/cp-test.txt ha-759753-m03:/home/docker/cp-test_ha-759753_ha-759753-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 ssh -n ha-759753 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 ssh -n ha-759753-m03 "sudo cat /home/docker/cp-test_ha-759753_ha-759753-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 cp ha-759753:/home/docker/cp-test.txt ha-759753-m04:/home/docker/cp-test_ha-759753_ha-759753-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 ssh -n ha-759753 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 ssh -n ha-759753-m04 "sudo cat /home/docker/cp-test_ha-759753_ha-759753-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 cp testdata/cp-test.txt ha-759753-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 ssh -n ha-759753-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 cp ha-759753-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2506861656/001/cp-test_ha-759753-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 ssh -n ha-759753-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 cp ha-759753-m02:/home/docker/cp-test.txt ha-759753:/home/docker/cp-test_ha-759753-m02_ha-759753.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 ssh -n ha-759753-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 ssh -n ha-759753 "sudo cat /home/docker/cp-test_ha-759753-m02_ha-759753.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 cp ha-759753-m02:/home/docker/cp-test.txt ha-759753-m03:/home/docker/cp-test_ha-759753-m02_ha-759753-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 ssh -n ha-759753-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 ssh -n ha-759753-m03 "sudo cat /home/docker/cp-test_ha-759753-m02_ha-759753-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 cp ha-759753-m02:/home/docker/cp-test.txt ha-759753-m04:/home/docker/cp-test_ha-759753-m02_ha-759753-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 ssh -n ha-759753-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 ssh -n ha-759753-m04 "sudo cat /home/docker/cp-test_ha-759753-m02_ha-759753-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 cp testdata/cp-test.txt ha-759753-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 ssh -n ha-759753-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 cp ha-759753-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2506861656/001/cp-test_ha-759753-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 ssh -n ha-759753-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 cp ha-759753-m03:/home/docker/cp-test.txt ha-759753:/home/docker/cp-test_ha-759753-m03_ha-759753.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 ssh -n ha-759753-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 ssh -n ha-759753 "sudo cat /home/docker/cp-test_ha-759753-m03_ha-759753.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 cp ha-759753-m03:/home/docker/cp-test.txt ha-759753-m02:/home/docker/cp-test_ha-759753-m03_ha-759753-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 ssh -n ha-759753-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 ssh -n ha-759753-m02 "sudo cat /home/docker/cp-test_ha-759753-m03_ha-759753-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 cp ha-759753-m03:/home/docker/cp-test.txt ha-759753-m04:/home/docker/cp-test_ha-759753-m03_ha-759753-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 ssh -n ha-759753-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 ssh -n ha-759753-m04 "sudo cat /home/docker/cp-test_ha-759753-m03_ha-759753-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 cp testdata/cp-test.txt ha-759753-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 ssh -n ha-759753-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 cp ha-759753-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2506861656/001/cp-test_ha-759753-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 ssh -n ha-759753-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 cp ha-759753-m04:/home/docker/cp-test.txt ha-759753:/home/docker/cp-test_ha-759753-m04_ha-759753.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 ssh -n ha-759753-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 ssh -n ha-759753 "sudo cat /home/docker/cp-test_ha-759753-m04_ha-759753.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 cp ha-759753-m04:/home/docker/cp-test.txt ha-759753-m02:/home/docker/cp-test_ha-759753-m04_ha-759753-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 ssh -n ha-759753-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 ssh -n ha-759753-m02 "sudo cat /home/docker/cp-test_ha-759753-m04_ha-759753-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 cp ha-759753-m04:/home/docker/cp-test.txt ha-759753-m03:/home/docker/cp-test_ha-759753-m04_ha-759753-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 ssh -n ha-759753-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 ssh -n ha-759753-m03 "sudo cat /home/docker/cp-test_ha-759753-m04_ha-759753-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (10.73s)
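
CopyFile repeats the same round-trip for every node pair: push a file from the host to one node, copy it node-to-node, and read it back over ssh. One such round-trip, assuming the ha-759753 profile with nodes ha-759753 and ha-759753-m02 (the destination file name is arbitrary for this sketch):

    minikube -p ha-759753 cp testdata/cp-test.txt ha-759753:/home/docker/cp-test.txt
    minikube -p ha-759753 cp ha-759753:/home/docker/cp-test.txt ha-759753-m02:/home/docker/cp-test_copy.txt
    minikube -p ha-759753 ssh -n ha-759753-m02 "sudo cat /home/docker/cp-test_copy.txt"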

TestMultiControlPlane/serial/StopSecondaryNode (84.85s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 node stop m02 --alsologtostderr -v 5
E1217 19:40:57.461774    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-841762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:41:38.424458    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-841762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:41:49.529227    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-345985/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-759753 node stop m02 --alsologtostderr -v 5: (1m24.337740285s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-759753 status --alsologtostderr -v 5: exit status 7 (507.648421ms)

-- stdout --
	ha-759753
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-759753-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-759753-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-759753-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1217 19:42:13.964148   20482 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:42:13.964271   20482 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:42:13.964280   20482 out.go:374] Setting ErrFile to fd 2...
	I1217 19:42:13.964284   20482 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:42:13.964471   20482 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-3611/.minikube/bin
	I1217 19:42:13.964658   20482 out.go:368] Setting JSON to false
	I1217 19:42:13.964687   20482 mustload.go:66] Loading cluster: ha-759753
	I1217 19:42:13.964805   20482 notify.go:221] Checking for updates...
	I1217 19:42:13.965112   20482 config.go:182] Loaded profile config "ha-759753": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:42:13.965355   20482 status.go:174] checking status of ha-759753 ...
	I1217 19:42:13.967994   20482 status.go:371] ha-759753 host status = "Running" (err=<nil>)
	I1217 19:42:13.968016   20482 host.go:66] Checking if "ha-759753" exists ...
	I1217 19:42:13.970480   20482 main.go:143] libmachine: domain ha-759753 has defined MAC address 52:54:00:1c:b7:64 in network mk-ha-759753
	I1217 19:42:13.970927   20482 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:1c:b7:64", ip: ""} in network mk-ha-759753: {Iface:virbr1 ExpiryTime:2025-12-17 20:36:26 +0000 UTC Type:0 Mac:52:54:00:1c:b7:64 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-759753 Clientid:01:52:54:00:1c:b7:64}
	I1217 19:42:13.970958   20482 main.go:143] libmachine: domain ha-759753 has defined IP address 192.168.39.77 and MAC address 52:54:00:1c:b7:64 in network mk-ha-759753
	I1217 19:42:13.971097   20482 host.go:66] Checking if "ha-759753" exists ...
	I1217 19:42:13.971311   20482 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 19:42:13.973382   20482 main.go:143] libmachine: domain ha-759753 has defined MAC address 52:54:00:1c:b7:64 in network mk-ha-759753
	I1217 19:42:13.973724   20482 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:1c:b7:64", ip: ""} in network mk-ha-759753: {Iface:virbr1 ExpiryTime:2025-12-17 20:36:26 +0000 UTC Type:0 Mac:52:54:00:1c:b7:64 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-759753 Clientid:01:52:54:00:1c:b7:64}
	I1217 19:42:13.973746   20482 main.go:143] libmachine: domain ha-759753 has defined IP address 192.168.39.77 and MAC address 52:54:00:1c:b7:64 in network mk-ha-759753
	I1217 19:42:13.973864   20482 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/ha-759753/id_rsa Username:docker}
	I1217 19:42:14.060843   20482 ssh_runner.go:195] Run: systemctl --version
	I1217 19:42:14.067982   20482 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 19:42:14.086578   20482 kubeconfig.go:125] found "ha-759753" server: "https://192.168.39.254:8443"
	I1217 19:42:14.086668   20482 api_server.go:166] Checking apiserver status ...
	I1217 19:42:14.086750   20482 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 19:42:14.109060   20482 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1404/cgroup
	W1217 19:42:14.122291   20482 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1404/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1217 19:42:14.122350   20482 ssh_runner.go:195] Run: ls
	I1217 19:42:14.130431   20482 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1217 19:42:14.136465   20482 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1217 19:42:14.136490   20482 status.go:463] ha-759753 apiserver status = Running (err=<nil>)
	I1217 19:42:14.136502   20482 status.go:176] ha-759753 status: &{Name:ha-759753 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 19:42:14.136545   20482 status.go:174] checking status of ha-759753-m02 ...
	I1217 19:42:14.138183   20482 status.go:371] ha-759753-m02 host status = "Stopped" (err=<nil>)
	I1217 19:42:14.138203   20482 status.go:384] host is not running, skipping remaining checks
	I1217 19:42:14.138208   20482 status.go:176] ha-759753-m02 status: &{Name:ha-759753-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 19:42:14.138222   20482 status.go:174] checking status of ha-759753-m03 ...
	I1217 19:42:14.139414   20482 status.go:371] ha-759753-m03 host status = "Running" (err=<nil>)
	I1217 19:42:14.139434   20482 host.go:66] Checking if "ha-759753-m03" exists ...
	I1217 19:42:14.141461   20482 main.go:143] libmachine: domain ha-759753-m03 has defined MAC address 52:54:00:ba:6b:97 in network mk-ha-759753
	I1217 19:42:14.141833   20482 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:6b:97", ip: ""} in network mk-ha-759753: {Iface:virbr1 ExpiryTime:2025-12-17 20:38:39 +0000 UTC Type:0 Mac:52:54:00:ba:6b:97 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:ha-759753-m03 Clientid:01:52:54:00:ba:6b:97}
	I1217 19:42:14.141854   20482 main.go:143] libmachine: domain ha-759753-m03 has defined IP address 192.168.39.46 and MAC address 52:54:00:ba:6b:97 in network mk-ha-759753
	I1217 19:42:14.142024   20482 host.go:66] Checking if "ha-759753-m03" exists ...
	I1217 19:42:14.142237   20482 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 19:42:14.144208   20482 main.go:143] libmachine: domain ha-759753-m03 has defined MAC address 52:54:00:ba:6b:97 in network mk-ha-759753
	I1217 19:42:14.144561   20482 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:6b:97", ip: ""} in network mk-ha-759753: {Iface:virbr1 ExpiryTime:2025-12-17 20:38:39 +0000 UTC Type:0 Mac:52:54:00:ba:6b:97 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:ha-759753-m03 Clientid:01:52:54:00:ba:6b:97}
	I1217 19:42:14.144581   20482 main.go:143] libmachine: domain ha-759753-m03 has defined IP address 192.168.39.46 and MAC address 52:54:00:ba:6b:97 in network mk-ha-759753
	I1217 19:42:14.144703   20482 sshutil.go:53] new ssh client: &{IP:192.168.39.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/ha-759753-m03/id_rsa Username:docker}
	I1217 19:42:14.231096   20482 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 19:42:14.249421   20482 kubeconfig.go:125] found "ha-759753" server: "https://192.168.39.254:8443"
	I1217 19:42:14.249450   20482 api_server.go:166] Checking apiserver status ...
	I1217 19:42:14.249496   20482 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 19:42:14.269682   20482 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1787/cgroup
	W1217 19:42:14.281456   20482 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1787/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1217 19:42:14.281517   20482 ssh_runner.go:195] Run: ls
	I1217 19:42:14.286434   20482 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1217 19:42:14.291016   20482 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1217 19:42:14.291035   20482 status.go:463] ha-759753-m03 apiserver status = Running (err=<nil>)
	I1217 19:42:14.291043   20482 status.go:176] ha-759753-m03 status: &{Name:ha-759753-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 19:42:14.291056   20482 status.go:174] checking status of ha-759753-m04 ...
	I1217 19:42:14.292580   20482 status.go:371] ha-759753-m04 host status = "Running" (err=<nil>)
	I1217 19:42:14.292600   20482 host.go:66] Checking if "ha-759753-m04" exists ...
	I1217 19:42:14.295288   20482 main.go:143] libmachine: domain ha-759753-m04 has defined MAC address 52:54:00:e2:07:c0 in network mk-ha-759753
	I1217 19:42:14.295732   20482 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e2:07:c0", ip: ""} in network mk-ha-759753: {Iface:virbr1 ExpiryTime:2025-12-17 20:40:09 +0000 UTC Type:0 Mac:52:54:00:e2:07:c0 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-759753-m04 Clientid:01:52:54:00:e2:07:c0}
	I1217 19:42:14.295759   20482 main.go:143] libmachine: domain ha-759753-m04 has defined IP address 192.168.39.251 and MAC address 52:54:00:e2:07:c0 in network mk-ha-759753
	I1217 19:42:14.295897   20482 host.go:66] Checking if "ha-759753-m04" exists ...
	I1217 19:42:14.296069   20482 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 19:42:14.298298   20482 main.go:143] libmachine: domain ha-759753-m04 has defined MAC address 52:54:00:e2:07:c0 in network mk-ha-759753
	I1217 19:42:14.298748   20482 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e2:07:c0", ip: ""} in network mk-ha-759753: {Iface:virbr1 ExpiryTime:2025-12-17 20:40:09 +0000 UTC Type:0 Mac:52:54:00:e2:07:c0 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-759753-m04 Clientid:01:52:54:00:e2:07:c0}
	I1217 19:42:14.298770   20482 main.go:143] libmachine: domain ha-759753-m04 has defined IP address 192.168.39.251 and MAC address 52:54:00:e2:07:c0 in network mk-ha-759753
	I1217 19:42:14.298895   20482 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/ha-759753-m04/id_rsa Username:docker}
	I1217 19:42:14.390841   20482 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 19:42:14.409846   20482 status.go:176] ha-759753-m04 status: &{Name:ha-759753-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (84.85s)
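The status probe in the stderr block above follows a fixed sequence on each control-plane node: find the kube-apiserver process with pgrep, try to read its freezer cgroup (which fails on cgroup v2 guests, hence the warning), then query the apiserver's /healthz endpoint. A rough manual re-run of the same checks against this profile, assuming curl is present in the guest, could look like:

    # Locate the apiserver PID on the primary control-plane node (profile name and VIP taken from the log above).
    PID=$(out/minikube-linux-amd64 -p ha-759753 ssh "sudo pgrep -xnf kube-apiserver.*minikube.*")
    # The freezer lookup is expected to fail on cgroup v2, matching the warning in the log.
    out/minikube-linux-amd64 -p ha-759753 ssh "sudo egrep ^[0-9]+:freezer: /proc/${PID}/cgroup" || echo "no freezer cgroup, falling back to healthz"
    # An unauthenticated healthz probe should print "ok" while the apiserver is up.
    out/minikube-linux-amd64 -p ha-759753 ssh "curl -sk https://192.168.39.254:8443/healthz"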

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.52s)
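The degraded-state check is driven entirely by "minikube profile list --output json". To inspect the same data by hand, the per-profile Status field can be pulled out with jq; jq itself and the exact field names are assumptions here, not something the log guarantees:

    # Show how the HA profile is reported after one control-plane node was stopped.
    out/minikube-linux-amd64 profile list --output json | jq '.valid[] | select(.Name == "ha-759753") | {Name, Status}'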

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (32.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 node start m02 --alsologtostderr -v 5
E1217 19:42:17.233612    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-345985/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-759753 node start m02 --alsologtostderr -v 5: (31.457611714s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (32.50s)
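After "node start m02" the test only asserts on status and kubectl get nodes. To block until the restarted control plane actually reports Ready, a kubectl wait along these lines is one option (a sketch; the node name simply follows the m02 naming seen above):

    out/minikube-linux-amd64 -p ha-759753 node start m02 --alsologtostderr -v 5
    # Wait for the corresponding Kubernetes node to report Ready before running further checks.
    kubectl wait --for=condition=Ready node/ha-759753-m02 --timeout=180s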

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.92s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (378.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 stop --alsologtostderr -v 5
E1217 19:43:00.346338    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-841762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:43:16.200223    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:44:39.270464    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:45:16.485218    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-841762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:45:44.191768    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-841762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:46:49.531218    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-345985/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-759753 stop --alsologtostderr -v 5: (4m22.898174506s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 start --wait true --alsologtostderr -v 5
E1217 19:48:16.195632    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-759753 start --wait true --alsologtostderr -v 5: (1m55.819676914s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (378.87s)
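RestartClusterKeepsNodes compares the node list output before and after a full stop/start cycle. A hand-rolled version of that comparison, assuming plain diff is an acceptable check, might be:

    BEFORE=$(out/minikube-linux-amd64 -p ha-759753 node list)
    out/minikube-linux-amd64 -p ha-759753 stop --alsologtostderr -v 5
    out/minikube-linux-amd64 -p ha-759753 start --wait true --alsologtostderr -v 5
    AFTER=$(out/minikube-linux-amd64 -p ha-759753 node list)
    # The restart should preserve every node name/IP pair that existed before the stop.
    diff <(printf '%s\n' "$BEFORE") <(printf '%s\n' "$AFTER") && echo "node list unchanged"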

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (18.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-759753 node delete m03 --alsologtostderr -v 5: (17.521540687s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.14s)
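The go-template above prints one Ready condition status per remaining node. An equivalent jsonpath form that also shows which node each status belongs to (jsonpath filtering assumed to be acceptable for the same check):

    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'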

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.50s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (257.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 stop --alsologtostderr -v 5
E1217 19:50:16.485381    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-841762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:51:49.530907    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-345985/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:53:12.597490    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-345985/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:53:16.197495    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-759753 stop --alsologtostderr -v 5: (4m17.054355085s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-759753 status --alsologtostderr -v 5: exit status 7 (62.750779ms)

                                                
                                                
-- stdout --
	ha-759753
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-759753-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-759753-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 19:53:42.968647   23773 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:53:42.969009   23773 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:53:42.969017   23773 out.go:374] Setting ErrFile to fd 2...
	I1217 19:53:42.969021   23773 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:53:42.969210   23773 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-3611/.minikube/bin
	I1217 19:53:42.969373   23773 out.go:368] Setting JSON to false
	I1217 19:53:42.969401   23773 mustload.go:66] Loading cluster: ha-759753
	I1217 19:53:42.969433   23773 notify.go:221] Checking for updates...
	I1217 19:53:42.969795   23773 config.go:182] Loaded profile config "ha-759753": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:53:42.969808   23773 status.go:174] checking status of ha-759753 ...
	I1217 19:53:42.972588   23773 status.go:371] ha-759753 host status = "Stopped" (err=<nil>)
	I1217 19:53:42.972605   23773 status.go:384] host is not running, skipping remaining checks
	I1217 19:53:42.972612   23773 status.go:176] ha-759753 status: &{Name:ha-759753 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 19:53:42.972634   23773 status.go:174] checking status of ha-759753-m02 ...
	I1217 19:53:42.973868   23773 status.go:371] ha-759753-m02 host status = "Stopped" (err=<nil>)
	I1217 19:53:42.973881   23773 status.go:384] host is not running, skipping remaining checks
	I1217 19:53:42.973887   23773 status.go:176] ha-759753-m02 status: &{Name:ha-759753-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 19:53:42.973901   23773 status.go:174] checking status of ha-759753-m04 ...
	I1217 19:53:42.975002   23773 status.go:371] ha-759753-m04 host status = "Stopped" (err=<nil>)
	I1217 19:53:42.975015   23773 status.go:384] host is not running, skipping remaining checks
	I1217 19:53:42.975019   23773 status.go:176] ha-759753-m04 status: &{Name:ha-759753-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (257.12s)
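The exit status 7 from the status command above is the expected result for a fully stopped cluster: minikube status --help describes the exit code as a bitmask of host, cluster and Kubernetes health, so 7 (1+2+4) signals that all three are down rather than that the command itself failed. Checking it explicitly might look like:

    out/minikube-linux-amd64 -p ha-759753 status --alsologtostderr -v 5
    rc=$?
    # 7 = 1 (host) + 2 (cluster) + 4 (Kubernetes), i.e. everything reported stopped.
    [ "$rc" -eq 7 ] && echo "cluster fully stopped, as expected"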

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (86.27s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-759753 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m25.653292528s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (86.27s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.57s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (82.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 node add --control-plane --alsologtostderr -v 5
E1217 19:55:16.485302    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-841762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-759753 node add --control-plane --alsologtostderr -v 5: (1m22.163702379s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-759753 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (82.84s)
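Adding a fourth control-plane member mirrors the earlier delete. A minimal sequence based on the commands above, plus a kubectl view restricted to control-plane nodes (the node-role.kubernetes.io/control-plane label is the standard kubeadm label and is assumed here, not shown in the log):

    out/minikube-linux-amd64 -p ha-759753 node add --control-plane --alsologtostderr -v 5
    out/minikube-linux-amd64 -p ha-759753 status --alsologtostderr -v 5
    # List only control-plane nodes to confirm the new member joined.
    kubectl get nodes -l node-role.kubernetes.io/control-plane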

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.66s)

                                                
                                    
TestJSONOutput/start/Command (82.87s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-687739 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
E1217 19:56:39.553878    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-841762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:56:49.529736    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-345985/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-687739 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m22.865805654s)
--- PASS: TestJSONOutput/start/Command (82.87s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.74s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-687739 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.63s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-687739 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.43s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-687739 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-687739 --output=json --user=testUser: (7.428302726s)
--- PASS: TestJSONOutput/stop/Command (7.43s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-906228 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-906228 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (73.593164ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"80e265ca-a7a5-4392-a841-2a1030f47d9b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-906228] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0b34f10b-9662-4d18-9cc3-0a95ef350d21","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22186"}}
	{"specversion":"1.0","id":"afab201d-528d-4002-b707-2b2f38a5213d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f9770161-3f5e-4e38-a7bb-a62ba3a78701","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22186-3611/kubeconfig"}}
	{"specversion":"1.0","id":"b7359e98-a286-4519-8578-59588939e52d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-3611/.minikube"}}
	{"specversion":"1.0","id":"304e17a3-96c2-41fa-99dd-bf10ef20ad88","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"61975b01-7a40-47ba-a2b1-5709b971897e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"fc0c7e0e-471b-42cc-863a-39f9b26adc3c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-906228" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-906228
--- PASS: TestErrorJSONOutput (0.22s)
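Both the happy path and the error path of --output=json emit one CloudEvents-style JSON object per line, as the stdout block above shows. Assuming jq is available, the step and error events of any such run can be filtered out like this:

    out/minikube-linux-amd64 start -p json-output-error-906228 --memory=3072 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type | endswith(".step") or endswith(".error")) | [.type, .data.name, .data.message] | join(" | ")'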

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (79.44s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-933941 --driver=kvm2  --container-runtime=crio
E1217 19:58:16.200654    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-933941 --driver=kvm2  --container-runtime=crio: (38.661034431s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-939850 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-939850 --driver=kvm2  --container-runtime=crio: (38.294366817s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-933941
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-939850
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "second-939850" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-939850
helpers_test.go:176: Cleaning up "first-933941" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-933941
--- PASS: TestMinikubeProfile (79.44s)
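"minikube profile <name>" switches the active profile and "profile list -ojson" reports all of them. Running profile with no argument prints the currently active one; that no-argument behaviour, and the use of jq, are assumptions rather than something exercised in this run:

    out/minikube-linux-amd64 profile first-933941    # make first-933941 the active profile
    out/minikube-linux-amd64 profile                 # with no argument, print the active profile name
    out/minikube-linux-amd64 profile list -ojson | jq -r '.valid[].Name'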

                                                
                                    
TestMountStart/serial/StartWithMountFirst (20.6s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-839174 --memory=3072 --mount-string /tmp/TestMountStartserial1507744773/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-839174 --memory=3072 --mount-string /tmp/TestMountStartserial1507744773/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (19.603555694s)
--- PASS: TestMountStart/serial/StartWithMountFirst (20.60s)
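The first mount-start profile is created with a host directory mounted at /minikube-host and no Kubernetes. A quick way to confirm the mount actually landed inside the guest (standard mount and ls tools in the ISO are assumed):

    # The /minikube-host path comes from the --mount-string value used above.
    out/minikube-linux-amd64 -p mount-start-1-839174 ssh "mount | grep /minikube-host"
    out/minikube-linux-amd64 -p mount-start-1-839174 ssh "ls -la /minikube-host"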

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (100.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-643742 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1217 20:00:16.485590    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-841762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:01:19.271986    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-643742 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m39.904668077s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-643742 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (100.23s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-643742 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-643742 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-643742 -- rollout status deployment/busybox: (4.757337787s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-643742 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-643742 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-643742 -- exec busybox-7b57f96db7-64x4q -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-643742 -- exec busybox-7b57f96db7-ztdkt -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-643742 -- exec busybox-7b57f96db7-64x4q -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-643742 -- exec busybox-7b57f96db7-ztdkt -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-643742 -- exec busybox-7b57f96db7-64x4q -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-643742 -- exec busybox-7b57f96db7-ztdkt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.28s)
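The deployment check resolves three names (kubernetes.io, kubernetes.default and kubernetes.default.svc.cluster.local) from each busybox replica so that cluster DNS is exercised from both nodes. The same checks as a compact loop, discovering the pod names instead of hard-coding the hash suffixes:

    for pod in $(kubectl --context multinode-643742 get pods -o jsonpath='{.items[*].metadata.name}'); do
      for name in kubernetes.io kubernetes.default kubernetes.default.svc.cluster.local; do
        kubectl --context multinode-643742 exec "$pod" -- nslookup "$name"
      done
    done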

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-643742 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-643742 -- exec busybox-7b57f96db7-64x4q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-643742 -- exec busybox-7b57f96db7-64x4q -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-643742 -- exec busybox-7b57f96db7-ztdkt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-643742 -- exec busybox-7b57f96db7-ztdkt -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.84s)
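Each pod first resolves host.minikube.internal, with awk 'NR==5' | cut -d' ' -f3 slicing the address out of the fifth line of busybox's nslookup output, and then pings that address to prove connectivity back to the host. The two steps chained for a single, arbitrarily chosen pod:

    POD=$(kubectl --context multinode-643742 get pods -o jsonpath='{.items[0].metadata.name}')
    HOST_IP=$(kubectl --context multinode-643742 exec "$POD" -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    kubectl --context multinode-643742 exec "$POD" -- ping -c 1 "$HOST_IP"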

                                                
                                    
TestMultiNode/serial/AddNode (42.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-643742 -v=5 --alsologtostderr
E1217 20:01:49.529345    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-345985/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-643742 -v=5 --alsologtostderr: (41.768332124s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-643742 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (42.23s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-643742 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.46s)

                                                
                                    
TestMultiNode/serial/CopyFile (5.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-643742 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-643742 cp testdata/cp-test.txt multinode-643742:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-643742 ssh -n multinode-643742 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-643742 cp multinode-643742:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile14696638/001/cp-test_multinode-643742.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-643742 ssh -n multinode-643742 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-643742 cp multinode-643742:/home/docker/cp-test.txt multinode-643742-m02:/home/docker/cp-test_multinode-643742_multinode-643742-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-643742 ssh -n multinode-643742 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-643742 ssh -n multinode-643742-m02 "sudo cat /home/docker/cp-test_multinode-643742_multinode-643742-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-643742 cp multinode-643742:/home/docker/cp-test.txt multinode-643742-m03:/home/docker/cp-test_multinode-643742_multinode-643742-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-643742 ssh -n multinode-643742 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-643742 ssh -n multinode-643742-m03 "sudo cat /home/docker/cp-test_multinode-643742_multinode-643742-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-643742 cp testdata/cp-test.txt multinode-643742-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-643742 ssh -n multinode-643742-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-643742 cp multinode-643742-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile14696638/001/cp-test_multinode-643742-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-643742 ssh -n multinode-643742-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-643742 cp multinode-643742-m02:/home/docker/cp-test.txt multinode-643742:/home/docker/cp-test_multinode-643742-m02_multinode-643742.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-643742 ssh -n multinode-643742-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-643742 ssh -n multinode-643742 "sudo cat /home/docker/cp-test_multinode-643742-m02_multinode-643742.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-643742 cp multinode-643742-m02:/home/docker/cp-test.txt multinode-643742-m03:/home/docker/cp-test_multinode-643742-m02_multinode-643742-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-643742 ssh -n multinode-643742-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-643742 ssh -n multinode-643742-m03 "sudo cat /home/docker/cp-test_multinode-643742-m02_multinode-643742-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-643742 cp testdata/cp-test.txt multinode-643742-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-643742 ssh -n multinode-643742-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-643742 cp multinode-643742-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile14696638/001/cp-test_multinode-643742-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-643742 ssh -n multinode-643742-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-643742 cp multinode-643742-m03:/home/docker/cp-test.txt multinode-643742:/home/docker/cp-test_multinode-643742-m03_multinode-643742.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-643742 ssh -n multinode-643742-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-643742 ssh -n multinode-643742 "sudo cat /home/docker/cp-test_multinode-643742-m03_multinode-643742.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-643742 cp multinode-643742-m03:/home/docker/cp-test.txt multinode-643742-m02:/home/docker/cp-test_multinode-643742-m03_multinode-643742-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-643742 ssh -n multinode-643742-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-643742 ssh -n multinode-643742-m02 "sudo cat /home/docker/cp-test_multinode-643742-m03_multinode-643742-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.87s)
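minikube cp accepts host-to-node, node-to-host and node-to-node source/target pairs, and every copy above is verified with an ssh readback of the target file. The three directions in miniature, using the same profile and guest paths as the log (the /tmp destination on the host is arbitrary):

    out/minikube-linux-amd64 -p multinode-643742 cp testdata/cp-test.txt multinode-643742:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p multinode-643742 cp multinode-643742:/home/docker/cp-test.txt /tmp/cp-test-copy.txt
    out/minikube-linux-amd64 -p multinode-643742 cp multinode-643742:/home/docker/cp-test.txt multinode-643742-m02:/home/docker/cp-test.txt
    # Read the file back on the target node to confirm the node-to-node copy.
    out/minikube-linux-amd64 -p multinode-643742 ssh -n multinode-643742-m02 "sudo cat /home/docker/cp-test.txt"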

                                                
                                    
TestMultiNode/serial/StopNode (2.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-643742 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-643742 node stop m03: (1.624529504s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-643742 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-643742 status: exit status 7 (333.001999ms)

                                                
                                                
-- stdout --
	multinode-643742
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-643742-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-643742-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-643742 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-643742 status --alsologtostderr: exit status 7 (332.648972ms)

                                                
                                                
-- stdout --
	multinode-643742
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-643742-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-643742-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 20:02:28.739541   28836 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:02:28.739775   28836 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:02:28.739783   28836 out.go:374] Setting ErrFile to fd 2...
	I1217 20:02:28.739787   28836 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:02:28.739987   28836 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-3611/.minikube/bin
	I1217 20:02:28.740142   28836 out.go:368] Setting JSON to false
	I1217 20:02:28.740169   28836 mustload.go:66] Loading cluster: multinode-643742
	I1217 20:02:28.740267   28836 notify.go:221] Checking for updates...
	I1217 20:02:28.740497   28836 config.go:182] Loaded profile config "multinode-643742": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:02:28.740519   28836 status.go:174] checking status of multinode-643742 ...
	I1217 20:02:28.742627   28836 status.go:371] multinode-643742 host status = "Running" (err=<nil>)
	I1217 20:02:28.742644   28836 host.go:66] Checking if "multinode-643742" exists ...
	I1217 20:02:28.745178   28836 main.go:143] libmachine: domain multinode-643742 has defined MAC address 52:54:00:a1:79:86 in network mk-multinode-643742
	I1217 20:02:28.745621   28836 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a1:79:86", ip: ""} in network mk-multinode-643742: {Iface:virbr1 ExpiryTime:2025-12-17 21:00:05 +0000 UTC Type:0 Mac:52:54:00:a1:79:86 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:multinode-643742 Clientid:01:52:54:00:a1:79:86}
	I1217 20:02:28.745645   28836 main.go:143] libmachine: domain multinode-643742 has defined IP address 192.168.39.190 and MAC address 52:54:00:a1:79:86 in network mk-multinode-643742
	I1217 20:02:28.745793   28836 host.go:66] Checking if "multinode-643742" exists ...
	I1217 20:02:28.746015   28836 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 20:02:28.748205   28836 main.go:143] libmachine: domain multinode-643742 has defined MAC address 52:54:00:a1:79:86 in network mk-multinode-643742
	I1217 20:02:28.748593   28836 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a1:79:86", ip: ""} in network mk-multinode-643742: {Iface:virbr1 ExpiryTime:2025-12-17 21:00:05 +0000 UTC Type:0 Mac:52:54:00:a1:79:86 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:multinode-643742 Clientid:01:52:54:00:a1:79:86}
	I1217 20:02:28.748623   28836 main.go:143] libmachine: domain multinode-643742 has defined IP address 192.168.39.190 and MAC address 52:54:00:a1:79:86 in network mk-multinode-643742
	I1217 20:02:28.748782   28836 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/multinode-643742/id_rsa Username:docker}
	I1217 20:02:28.831166   28836 ssh_runner.go:195] Run: systemctl --version
	I1217 20:02:28.839448   28836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 20:02:28.857415   28836 kubeconfig.go:125] found "multinode-643742" server: "https://192.168.39.190:8443"
	I1217 20:02:28.857455   28836 api_server.go:166] Checking apiserver status ...
	I1217 20:02:28.857503   28836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:02:28.878714   28836 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1388/cgroup
	W1217 20:02:28.892075   28836 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1388/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1217 20:02:28.892142   28836 ssh_runner.go:195] Run: ls
	I1217 20:02:28.897813   28836 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I1217 20:02:28.903912   28836 api_server.go:279] https://192.168.39.190:8443/healthz returned 200:
	ok
	I1217 20:02:28.903935   28836 status.go:463] multinode-643742 apiserver status = Running (err=<nil>)
	I1217 20:02:28.903944   28836 status.go:176] multinode-643742 status: &{Name:multinode-643742 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 20:02:28.903959   28836 status.go:174] checking status of multinode-643742-m02 ...
	I1217 20:02:28.905545   28836 status.go:371] multinode-643742-m02 host status = "Running" (err=<nil>)
	I1217 20:02:28.905565   28836 host.go:66] Checking if "multinode-643742-m02" exists ...
	I1217 20:02:28.908322   28836 main.go:143] libmachine: domain multinode-643742-m02 has defined MAC address 52:54:00:c6:95:c8 in network mk-multinode-643742
	I1217 20:02:28.908761   28836 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:95:c8", ip: ""} in network mk-multinode-643742: {Iface:virbr1 ExpiryTime:2025-12-17 21:01:02 +0000 UTC Type:0 Mac:52:54:00:c6:95:c8 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-643742-m02 Clientid:01:52:54:00:c6:95:c8}
	I1217 20:02:28.908794   28836 main.go:143] libmachine: domain multinode-643742-m02 has defined IP address 192.168.39.137 and MAC address 52:54:00:c6:95:c8 in network mk-multinode-643742
	I1217 20:02:28.908957   28836 host.go:66] Checking if "multinode-643742-m02" exists ...
	I1217 20:02:28.909202   28836 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 20:02:28.911423   28836 main.go:143] libmachine: domain multinode-643742-m02 has defined MAC address 52:54:00:c6:95:c8 in network mk-multinode-643742
	I1217 20:02:28.911822   28836 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:95:c8", ip: ""} in network mk-multinode-643742: {Iface:virbr1 ExpiryTime:2025-12-17 21:01:02 +0000 UTC Type:0 Mac:52:54:00:c6:95:c8 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-643742-m02 Clientid:01:52:54:00:c6:95:c8}
	I1217 20:02:28.911843   28836 main.go:143] libmachine: domain multinode-643742-m02 has defined IP address 192.168.39.137 and MAC address 52:54:00:c6:95:c8 in network mk-multinode-643742
	I1217 20:02:28.912003   28836 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/multinode-643742-m02/id_rsa Username:docker}
	I1217 20:02:28.992619   28836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 20:02:29.009305   28836 status.go:176] multinode-643742-m02 status: &{Name:multinode-643742-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1217 20:02:29.009343   28836 status.go:174] checking status of multinode-643742-m03 ...
	I1217 20:02:29.010909   28836 status.go:371] multinode-643742-m03 host status = "Stopped" (err=<nil>)
	I1217 20:02:29.010929   28836 status.go:384] host is not running, skipping remaining checks
	I1217 20:02:29.010934   28836 status.go:176] multinode-643742-m03 status: &{Name:multinode-643742-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.29s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (40.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-643742 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-643742 node start m03 -v=5 --alsologtostderr: (40.33617717s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-643742 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (40.84s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (285.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-643742
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-643742
E1217 20:03:16.198824    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:05:16.485986    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-841762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-643742: (2m39.046885797s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-643742 --wait=true -v=5 --alsologtostderr
E1217 20:06:49.529728    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-345985/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-643742 --wait=true -v=5 --alsologtostderr: (2m6.113122828s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-643742
--- PASS: TestMultiNode/serial/RestartKeepsNodes (285.28s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-643742 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-643742 node delete m03: (2.168229437s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-643742 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.64s)
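
The readiness check at multinode_test.go:444 above passes a Go template to kubectl. As a standalone illustration (the NodeList JSON below is hand-written sample data, not output from this run), the same template can be rendered with text/template to show that it emits one line per node carrying the status of its Ready condition:

// Illustrative sketch: renders the template the test passes to
// `kubectl get nodes -o go-template=...` against a hand-written NodeList.
package main

import (
	"encoding/json"
	"os"
	"text/template"
)

// sample is hypothetical data, not taken from this test run.
const sample = `{
  "items": [
    {"status": {"conditions": [
      {"type": "MemoryPressure", "status": "False"},
      {"type": "Ready", "status": "True"}
    ]}},
    {"status": {"conditions": [
      {"type": "Ready", "status": "True"}
    ]}}
  ]
}`

// tmpl is the exact template string from multinode_test.go:444 above.
const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

func main() {
	var nodes map[string]interface{}
	if err := json.Unmarshal([]byte(sample), &nodes); err != nil {
		panic(err)
	}
	t := template.Must(template.New("ready").Parse(tmpl))
	// Prints one " True" (or " False") line per node's Ready condition.
	if err := t.Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
}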

                                                
                                    
TestMultiNode/serial/StopMultiNode (170.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-643742 stop
E1217 20:08:16.200387    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:09:52.601180    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-345985/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:10:16.485345    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-841762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-643742 stop: (2m50.630891959s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-643742 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-643742 status: exit status 7 (60.098744ms)

                                                
                                                
-- stdout --
	multinode-643742
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-643742-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-643742 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-643742 status --alsologtostderr: exit status 7 (59.192773ms)

                                                
                                                
-- stdout --
	multinode-643742
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-643742-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 20:10:48.518829   31186 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:10:48.519061   31186 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:10:48.519070   31186 out.go:374] Setting ErrFile to fd 2...
	I1217 20:10:48.519074   31186 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:10:48.519233   31186 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-3611/.minikube/bin
	I1217 20:10:48.519398   31186 out.go:368] Setting JSON to false
	I1217 20:10:48.519424   31186 mustload.go:66] Loading cluster: multinode-643742
	I1217 20:10:48.519548   31186 notify.go:221] Checking for updates...
	I1217 20:10:48.519844   31186 config.go:182] Loaded profile config "multinode-643742": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:10:48.519859   31186 status.go:174] checking status of multinode-643742 ...
	I1217 20:10:48.521996   31186 status.go:371] multinode-643742 host status = "Stopped" (err=<nil>)
	I1217 20:10:48.522010   31186 status.go:384] host is not running, skipping remaining checks
	I1217 20:10:48.522015   31186 status.go:176] multinode-643742 status: &{Name:multinode-643742 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 20:10:48.522030   31186 status.go:174] checking status of multinode-643742-m02 ...
	I1217 20:10:48.523086   31186 status.go:371] multinode-643742-m02 host status = "Stopped" (err=<nil>)
	I1217 20:10:48.523098   31186 status.go:384] host is not running, skipping remaining checks
	I1217 20:10:48.523102   31186 status.go:176] multinode-643742-m02 status: &{Name:multinode-643742-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (170.75s)
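
Both status calls above exit with code 7 while every component prints Stopped, and the suite treats that as the expected result of a clean stop. A minimal sketch of that handling (binary path and profile name copied from the log; the exit-code switch is illustrative, not the suite's own helper):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-643742", "status")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("cluster running:\n%s", out)
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
		// Exit code 7 is what both stopped-cluster status calls above
		// returned; treat it as "stopped", as the test does.
		fmt.Printf("cluster stopped (exit 7, may be ok):\n%s", out)
	default:
		fmt.Printf("status failed: %v\n%s", err, out)
	}
}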

                                                
                                    
TestMultiNode/serial/RestartMultiNode (90.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-643742 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1217 20:11:49.528923    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-345985/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-643742 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m30.305422663s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-643742 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (90.76s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (38.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-643742
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-643742-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-643742-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (72.20212ms)

                                                
                                                
-- stdout --
	* [multinode-643742-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22186
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22186-3611/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-3611/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-643742-m02' is duplicated with machine name 'multinode-643742-m02' in profile 'multinode-643742'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-643742-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-643742-m03 --driver=kvm2  --container-runtime=crio: (37.242608319s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-643742
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-643742: exit status 80 (194.238334ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-643742 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-643742-m03 already exists in multinode-643742-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-643742-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (38.38s)

                                                
                                    
TestScheduledStopUnix (108.14s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-414965 --memory=3072 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-414965 --memory=3072 --driver=kvm2  --container-runtime=crio: (36.488890679s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-414965 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1217 20:15:54.662733   33460 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:15:54.663006   33460 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:15:54.663017   33460 out.go:374] Setting ErrFile to fd 2...
	I1217 20:15:54.663021   33460 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:15:54.663190   33460 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-3611/.minikube/bin
	I1217 20:15:54.663418   33460 out.go:368] Setting JSON to false
	I1217 20:15:54.663499   33460 mustload.go:66] Loading cluster: scheduled-stop-414965
	I1217 20:15:54.663807   33460 config.go:182] Loaded profile config "scheduled-stop-414965": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:15:54.663871   33460 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/scheduled-stop-414965/config.json ...
	I1217 20:15:54.664043   33460 mustload.go:66] Loading cluster: scheduled-stop-414965
	I1217 20:15:54.664146   33460 config.go:182] Loaded profile config "scheduled-stop-414965": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-414965 -n scheduled-stop-414965
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-414965 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1217 20:15:54.957823   33504 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:15:54.958058   33504 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:15:54.958066   33504 out.go:374] Setting ErrFile to fd 2...
	I1217 20:15:54.958069   33504 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:15:54.958231   33504 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-3611/.minikube/bin
	I1217 20:15:54.958437   33504 out.go:368] Setting JSON to false
	I1217 20:15:54.958643   33504 daemonize_unix.go:73] killing process 33494 as it is an old scheduled stop
	I1217 20:15:54.958738   33504 mustload.go:66] Loading cluster: scheduled-stop-414965
	I1217 20:15:54.959129   33504 config.go:182] Loaded profile config "scheduled-stop-414965": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:15:54.959213   33504 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/scheduled-stop-414965/config.json ...
	I1217 20:15:54.959403   33504 mustload.go:66] Loading cluster: scheduled-stop-414965
	I1217 20:15:54.959493   33504 config.go:182] Loaded profile config "scheduled-stop-414965": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1217 20:15:54.964941    7531 retry.go:31] will retry after 130.509µs: open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/scheduled-stop-414965/pid: no such file or directory
I1217 20:15:54.966069    7531 retry.go:31] will retry after 195.348µs: open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/scheduled-stop-414965/pid: no such file or directory
I1217 20:15:54.967209    7531 retry.go:31] will retry after 221.192µs: open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/scheduled-stop-414965/pid: no such file or directory
I1217 20:15:54.968356    7531 retry.go:31] will retry after 476.107µs: open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/scheduled-stop-414965/pid: no such file or directory
I1217 20:15:54.969490    7531 retry.go:31] will retry after 277.526µs: open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/scheduled-stop-414965/pid: no such file or directory
I1217 20:15:54.970600    7531 retry.go:31] will retry after 866.393µs: open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/scheduled-stop-414965/pid: no such file or directory
I1217 20:15:54.971725    7531 retry.go:31] will retry after 1.687822ms: open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/scheduled-stop-414965/pid: no such file or directory
I1217 20:15:54.973939    7531 retry.go:31] will retry after 1.053153ms: open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/scheduled-stop-414965/pid: no such file or directory
I1217 20:15:54.975092    7531 retry.go:31] will retry after 3.130773ms: open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/scheduled-stop-414965/pid: no such file or directory
I1217 20:15:54.978297    7531 retry.go:31] will retry after 4.076545ms: open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/scheduled-stop-414965/pid: no such file or directory
I1217 20:15:54.982433    7531 retry.go:31] will retry after 7.554296ms: open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/scheduled-stop-414965/pid: no such file or directory
I1217 20:15:54.990659    7531 retry.go:31] will retry after 12.664577ms: open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/scheduled-stop-414965/pid: no such file or directory
I1217 20:15:55.003904    7531 retry.go:31] will retry after 19.040989ms: open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/scheduled-stop-414965/pid: no such file or directory
I1217 20:15:55.023088    7531 retry.go:31] will retry after 18.05427ms: open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/scheduled-stop-414965/pid: no such file or directory
I1217 20:15:55.041272    7531 retry.go:31] will retry after 21.534598ms: open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/scheduled-stop-414965/pid: no such file or directory
I1217 20:15:55.063580    7531 retry.go:31] will retry after 61.979774ms: open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/scheduled-stop-414965/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-414965 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-414965 -n scheduled-stop-414965
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-414965
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-414965 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1217 20:16:20.703879   33651 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:16:20.704104   33651 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:16:20.704112   33651 out.go:374] Setting ErrFile to fd 2...
	I1217 20:16:20.704115   33651 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:16:20.704303   33651 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-3611/.minikube/bin
	I1217 20:16:20.704617   33651 out.go:368] Setting JSON to false
	I1217 20:16:20.704690   33651 mustload.go:66] Loading cluster: scheduled-stop-414965
	I1217 20:16:20.704979   33651 config.go:182] Loaded profile config "scheduled-stop-414965": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:16:20.705038   33651 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/scheduled-stop-414965/config.json ...
	I1217 20:16:20.705217   33651 mustload.go:66] Loading cluster: scheduled-stop-414965
	I1217 20:16:20.705306   33651 config.go:182] Loaded profile config "scheduled-stop-414965": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1217 20:16:49.531059    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-345985/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-414965
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-414965: exit status 7 (58.639395ms)

                                                
                                                
-- stdout --
	scheduled-stop-414965
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-414965 -n scheduled-stop-414965
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-414965 -n scheduled-stop-414965: exit status 7 (57.981375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-414965" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-414965
--- PASS: TestScheduledStopUnix (108.14s)
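
The retry.go lines above show the test polling for the profile's scheduled-stop pid file with short, growing delays until it exists. A rough sketch of that pattern, assuming nothing about minikube's internals beyond what the log shows (the initial delay, deadline, and doubling are placeholders, not minikube's constants):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForFile polls until path exists or timeout elapses, backing off
// between attempts roughly the way the retries above do.
func waitForFile(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 100 * time.Microsecond
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		} else if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s: %w", path, err)
		}
		fmt.Printf("will retry after %v\n", delay)
		time.Sleep(delay)
		delay *= 2 // placeholder backoff; the logged delays grow irregularly
	}
}

func main() {
	pid := "/home/jenkins/minikube-integration/22186-3611/.minikube/profiles/scheduled-stop-414965/pid"
	fmt.Println("result:", waitForFile(pid, 2*time.Second))
}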

                                                
                                    
TestRunningBinaryUpgrade (394.54s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.3861967457 start -p running-upgrade-824542 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.3861967457 start -p running-upgrade-824542 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m25.636464993s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-824542 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-824542 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (5m4.040264792s)
helpers_test.go:176: Cleaning up "running-upgrade-824542" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-824542
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-824542: (1.002054751s)
--- PASS: TestRunningBinaryUpgrade (394.54s)
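
The upgrade flow above is two starts against the same profile: first with the released v1.35.0 binary fetched to /tmp, then with the freshly built out/minikube-linux-amd64, followed by profile cleanup. A condensed sketch of that sequence (paths, flags, and profile name taken from the log lines above; error handling reduced to a panic for brevity):

package main

import (
	"os"
	"os/exec"
)

// run executes a command, streaming its output, and aborts on failure.
func run(bin string, args ...string) {
	cmd := exec.Command(bin, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}

func main() {
	profile := "running-upgrade-824542"
	// 1) Bring the cluster up with the old released binary.
	run("/tmp/minikube-v1.35.0.3861967457", "start", "-p", profile,
		"--memory=3072", "--vm-driver=kvm2", "--container-runtime=crio")
	// 2) Re-run start with the binary under test against the running cluster.
	run("out/minikube-linux-amd64", "start", "-p", profile,
		"--memory=3072", "--alsologtostderr", "-v=1", "--driver=kvm2", "--container-runtime=crio")
	// 3) Clean up the profile, as the helpers above do.
	run("out/minikube-linux-amd64", "delete", "-p", profile)
}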

                                                
                                    
TestKubernetesUpgrade (130.78s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-813074 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-813074 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (58.053606725s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-813074
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-813074: (1.90118247s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-813074 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-813074 status --format={{.Host}}: exit status 7 (65.542187ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-813074 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-813074 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (33.835573678s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-813074 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-813074 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-813074 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (82.528463ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-813074] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22186
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22186-3611/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-3611/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-rc.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-813074
	    minikube start -p kubernetes-upgrade-813074 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8130742 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-rc.1, by running:
	    
	    minikube start -p kubernetes-upgrade-813074 --kubernetes-version=v1.35.0-rc.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-813074 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1217 20:21:49.529268    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-345985/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-813074 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (35.793586835s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-813074" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-813074
--- PASS: TestKubernetesUpgrade (130.78s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-680060 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-680060 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (94.017234ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-680060] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22186
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22186-3611/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-3611/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
TestISOImage/Setup (19.27s)

                                                
                                                
=== RUN   TestISOImage/Setup
iso_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p guest-867309 --no-kubernetes --driver=kvm2  --container-runtime=crio
iso_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p guest-867309 --no-kubernetes --driver=kvm2  --container-runtime=crio: (19.270265422s)
--- PASS: TestISOImage/Setup (19.27s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (78.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-680060 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-680060 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m18.101335183s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-680060 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (78.35s)

                                                
                                    
TestNetworkPlugins/group/false (3.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-698465 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-698465 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (117.13606ms)

                                                
                                                
-- stdout --
	* [false-698465] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22186
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22186-3611/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-3611/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 20:17:09.664997   34741 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:17:09.665323   34741 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:17:09.665337   34741 out.go:374] Setting ErrFile to fd 2...
	I1217 20:17:09.665344   34741 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:17:09.665688   34741 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-3611/.minikube/bin
	I1217 20:17:09.666358   34741 out.go:368] Setting JSON to false
	I1217 20:17:09.667710   34741 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3569,"bootTime":1765999061,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 20:17:09.667778   34741 start.go:143] virtualization: kvm guest
	I1217 20:17:09.670786   34741 out.go:179] * [false-698465] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 20:17:09.671963   34741 out.go:179]   - MINIKUBE_LOCATION=22186
	I1217 20:17:09.671962   34741 notify.go:221] Checking for updates...
	I1217 20:17:09.673200   34741 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 20:17:09.674675   34741 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-3611/kubeconfig
	I1217 20:17:09.675825   34741 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-3611/.minikube
	I1217 20:17:09.676854   34741 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 20:17:09.677869   34741 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 20:17:09.679505   34741 config.go:182] Loaded profile config "NoKubernetes-680060": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:17:09.679632   34741 config.go:182] Loaded profile config "guest-867309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1217 20:17:09.679713   34741 config.go:182] Loaded profile config "offline-crio-597150": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:17:09.679782   34741 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 20:17:09.714583   34741 out.go:179] * Using the kvm2 driver based on user configuration
	I1217 20:17:09.715640   34741 start.go:309] selected driver: kvm2
	I1217 20:17:09.715656   34741 start.go:927] validating driver "kvm2" against <nil>
	I1217 20:17:09.715665   34741 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 20:17:09.717431   34741 out.go:203] 
	W1217 20:17:09.718598   34741 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1217 20:17:09.719681   34741 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-698465 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-698465

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-698465

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-698465

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-698465

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-698465

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-698465

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-698465

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-698465

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-698465

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-698465

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-698465"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-698465"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-698465"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-698465

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-698465"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-698465"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-698465" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-698465" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-698465" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-698465" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-698465" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-698465" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-698465" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-698465" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-698465"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-698465"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-698465"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-698465"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-698465"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-698465" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-698465" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-698465" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-698465"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-698465"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-698465"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-698465"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-698465"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-698465

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-698465"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-698465"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-698465"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-698465"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-698465"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-698465"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-698465"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-698465"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-698465"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-698465"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-698465"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-698465"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-698465"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-698465"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-698465"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-698465"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-698465"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-698465"

                                                
                                                
----------------------- debugLogs end: false-698465 [took: 3.095983328s] --------------------------------
helpers_test.go:176: Cleaning up "false-698465" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-698465
--- PASS: TestNetworkPlugins/group/false (3.36s)

                                                
                                    
TestISOImage/Binaries/crictl (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/crictl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-867309 ssh "which crictl"
--- PASS: TestISOImage/Binaries/crictl (0.17s)

                                                
                                    
TestISOImage/Binaries/curl (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/curl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-867309 ssh "which curl"
--- PASS: TestISOImage/Binaries/curl (0.19s)

                                                
                                    
TestISOImage/Binaries/docker (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/docker
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-867309 ssh "which docker"
--- PASS: TestISOImage/Binaries/docker (0.18s)

                                                
                                    
TestISOImage/Binaries/git (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/git
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-867309 ssh "which git"
--- PASS: TestISOImage/Binaries/git (0.17s)

                                                
                                    
TestISOImage/Binaries/iptables (0.16s)

                                                
                                                
=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/iptables
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-867309 ssh "which iptables"
--- PASS: TestISOImage/Binaries/iptables (0.16s)

                                                
                                    
TestISOImage/Binaries/podman (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/podman
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-867309 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.17s)

                                                
                                    
TestISOImage/Binaries/rsync (0.16s)

                                                
                                                
=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/rsync
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-867309 ssh "which rsync"
--- PASS: TestISOImage/Binaries/rsync (0.16s)

                                                
                                    
TestISOImage/Binaries/socat (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/socat
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-867309 ssh "which socat"
--- PASS: TestISOImage/Binaries/socat (0.17s)

                                                
                                    
TestISOImage/Binaries/wget (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/wget
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-867309 ssh "which wget"
--- PASS: TestISOImage/Binaries/wget (0.17s)

                                                
                                    
TestISOImage/Binaries/VBoxControl (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-867309 ssh "which VBoxControl"
--- PASS: TestISOImage/Binaries/VBoxControl (0.17s)

                                                
                                    
TestISOImage/Binaries/VBoxService (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-867309 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.17s)
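Note: each TestISOImage/Binaries subtest above asserts that a tool ships in the guest ISO by running `which <binary>` over `minikube ssh` and requiring a zero exit code. A minimal standalone sketch of the same check, assuming the guest-867309 profile from this run is still up (illustrative only, not the suite's code):
	// iso_binaries_check.go — illustrative only, not part of iso_test.go.
	// Mirrors the subtests above: a binary counts as present in the guest image
	// when `which <name>` over `minikube ssh` exits zero.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		profile := "guest-867309" // profile name taken from the run above; adjust as needed
		for _, bin := range []string{"crictl", "curl", "docker", "git", "iptables", "podman", "rsync", "socat", "wget", "VBoxControl", "VBoxService"} {
			cmd := exec.Command("minikube", "-p", profile, "ssh", "which "+bin)
			out, err := cmd.CombinedOutput()
			if err != nil {
				fmt.Printf("missing %-12s %v\n", bin, err) // `which` exits non-zero when not on PATH
				continue
			}
			fmt.Printf("found   %-12s %s", bin, out) // `which` output already ends with a newline
		}
	}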

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (32.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-680060 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-680060 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (31.291757655s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-680060 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-680060 status -o json: exit status 2 (211.757175ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-680060","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-680060
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (32.35s)
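Note: the exit status 2 above is expected — `minikube status` signals a stopped component through its exit code while still printing the JSON shown in stdout. A minimal sketch of decoding that JSON, assuming only the field names visible above (illustrative only):
	// nokubernetes_status.go — illustrative decoding of `minikube status -o json`,
	// using only the fields visible in the stdout captured above.
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	type profileStatus struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
	}

	func main() {
		// status exits non-zero (2 above) when a component is stopped, so keep the
		// stdout it printed and ignore the exit error, as the test does.
		out, _ := exec.Command("minikube", "-p", "NoKubernetes-680060", "status", "-o", "json").Output()
		var st profileStatus
		if err := json.Unmarshal(out, &st); err != nil {
			log.Fatalf("no parsable status output: %v", err)
		}
		fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
	}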

                                                
                                    
TestNoKubernetes/serial/Start (59.98s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-680060 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-680060 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (59.980971789s)
--- PASS: TestNoKubernetes/serial/Start (59.98s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22186-3611/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-680060 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-680060 "sudo systemctl is-active --quiet service kubelet": exit status 1 (172.393531ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.17s)
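Note: `systemctl is-active --quiet` prints nothing and reports through its exit code, so the non-zero ssh exit above is exactly what the test wants for a --no-kubernetes profile. A minimal re-run of the same check outside the harness (illustrative only):
	// verify_kubelet_stopped.go — illustrative only. Exit code 0 means the unit is
	// active; anything else (the status 4 seen above) means the kubelet is not running.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("minikube", "ssh", "-p", "NoKubernetes-680060",
			"sudo systemctl is-active --quiet service kubelet")
		if err := cmd.Run(); err != nil {
			fmt.Println("kubelet not active, as expected for --no-kubernetes:", err)
			return
		}
		fmt.Println("unexpected: kubelet reports active")
	}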

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.08s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-680060
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-680060: (1.33239079s)
--- PASS: TestNoKubernetes/serial/Stop (1.33s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (56.74s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-680060 --driver=kvm2  --container-runtime=crio
E1217 20:20:16.485472    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-841762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-680060 --driver=kvm2  --container-runtime=crio: (56.736701771s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (56.74s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-680060 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-680060 "sudo systemctl is-active --quiet service kubelet": exit status 1 (167.382613ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.17s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (4.06s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (4.06s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (85.7s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.3097575753 start -p stopped-upgrade-897195 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.3097575753 start -p stopped-upgrade-897195 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (53.661340662s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.3097575753 -p stopped-upgrade-897195 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.3097575753 -p stopped-upgrade-897195 stop: (1.861460575s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-897195 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-897195 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (30.176728886s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (85.70s)
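Note: the upgrade path above is three steps on one profile: start with a previously released binary, stop it, then start again with the binary under test. A reduced sketch with placeholder binary paths (illustrative only, not the suite's code):
	// stopped_upgrade_flow.go — the three steps exercised above, reduced to plain
	// command invocations. Binary paths are placeholders; the released binary in
	// this run lived under /tmp with a random suffix.
	package main

	import (
		"log"
		"os/exec"
	)

	func run(bin string, args ...string) {
		cmd := exec.Command(bin, args...)
		cmd.Stdout, cmd.Stderr = log.Writer(), log.Writer()
		if err := cmd.Run(); err != nil {
			log.Fatalf("%s %v failed: %v", bin, args, err)
		}
	}

	func main() {
		const profile = "stopped-upgrade-897195"
		oldBin := "/path/to/released/minikube" // placeholder for the v1.35.0 binary
		newBin := "out/minikube-linux-amd64"   // binary under test
		run(oldBin, "start", "-p", profile, "--memory=3072", "--vm-driver=kvm2", "--container-runtime=crio")
		run(oldBin, "-p", profile, "stop")
		run(newBin, "start", "-p", profile, "--memory=3072", "--driver=kvm2", "--container-runtime=crio")
	}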

                                                
                                    
TestPause/serial/Start (63.23s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-722044 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-722044 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m3.230134855s)
--- PASS: TestPause/serial/Start (63.23s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.96s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-897195
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.96s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (81.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-698465 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-698465 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m21.667416118s)
--- PASS: TestNetworkPlugins/group/auto/Start (81.67s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (64.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-698465 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-698465 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m4.106250141s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (64.11s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-698465 "pgrep -a kubelet"
I1217 20:23:51.050801    7531 config.go:182] Loaded profile config "auto-698465": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (12.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-698465 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-kg4dw" [30f4e78f-10a5-4ade-90f2-961dce4d5d1d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-kg4dw" [30f4e78f-10a5-4ade-90f2-961dce4d5d1d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.003744418s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.27s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-698465 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-698465 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-698465 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
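Note: the DNS, Localhost, and HairPin subtests above probe the netcat deployment from the inside: in-cluster service DNS resolution, a connection to the pod's own localhost:8080, and a hairpin connection back to itself through the `netcat` service name. An equivalent set of probes, assuming the auto-698465 context and the deployment created by NetCatPod (illustrative only):
	// netcat_probes.go — illustrative versions of the three probes above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		kubeContext := "auto-698465"
		probes := []struct{ name, cmd string }{
			{"DNS", "nslookup kubernetes.default"},          // in-cluster service DNS resolves
			{"Localhost", "nc -w 5 -i 5 -z localhost 8080"}, // pod reaches its own localhost
			{"HairPin", "nc -w 5 -i 5 -z netcat 8080"},      // pod reaches itself via its service name
		}
		for _, p := range probes {
			err := exec.Command("kubectl", "--context", kubeContext, "exec", "deployment/netcat",
				"--", "/bin/sh", "-c", p.cmd).Run()
			if err != nil {
				fmt.Printf("%-9s FAIL: %v\n", p.name, err)
				continue
			}
			fmt.Printf("%-9s OK\n", p.name)
		}
	}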

                                                
                                    
TestNetworkPlugins/group/calico/Start (75.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-698465 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-698465 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m15.825042645s)
--- PASS: TestNetworkPlugins/group/calico/Start (75.83s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (92.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-698465 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-698465 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m32.227000594s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (92.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-svzj5" [297e6017-8f4a-40d3-9ad5-64760ec9f39c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004387759s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-698465 "pgrep -a kubelet"
I1217 20:24:47.988291    7531 config.go:182] Loaded profile config "kindnet-698465": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-698465 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-nhf6x" [9f5901df-324f-40b9-aa3b-ed1145d8cc23] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-nhf6x" [9f5901df-324f-40b9-aa3b-ed1145d8cc23] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.007714305s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.31s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-698465 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-698465 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-698465 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (88.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-698465 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E1217 20:25:16.486070    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-841762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-698465 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m28.761555115s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (88.76s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-zm6f5" [94eae18b-00c3-4592-ac66-1e30d827d49b] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:353: "calico-node-zm6f5" [94eae18b-00c3-4592-ac66-1e30d827d49b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.0107903s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-698465 "pgrep -a kubelet"
I1217 20:25:34.528787    7531 config.go:182] Loaded profile config "calico-698465": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.18s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-698465 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-c2jrb" [5668de09-3e9f-4b03-a109-2d81ac1612e6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-c2jrb" [5668de09-3e9f-4b03-a109-2d81ac1612e6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.005406977s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.25s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (79.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-698465 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-698465 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m19.82198049s)
--- PASS: TestNetworkPlugins/group/flannel/Start (79.82s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-698465 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-698465 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-698465 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-698465 "pgrep -a kubelet"
I1217 20:25:51.460843    7531 config.go:182] Loaded profile config "custom-flannel-698465": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-698465 replace --force -f testdata/netcat-deployment.yaml
I1217 20:25:51.937164    7531 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1217 20:25:51.965252    7531 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-qz5rc" [5faac36d-1dee-4014-9a56-1efce4991cd4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-qz5rc" [5faac36d-1dee-4014-9a56-1efce4991cd4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.050691199s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.60s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (90.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-698465 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-698465 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m30.263470408s)
--- PASS: TestNetworkPlugins/group/bridge/Start (90.26s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-698465 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-698465 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-698465 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (99.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-433699 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
E1217 20:26:32.602482    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-345985/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-433699 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m39.418818936s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (99.42s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-698465 "pgrep -a kubelet"
I1217 20:26:44.481689    7531 config.go:182] Loaded profile config "enable-default-cni-698465": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-698465 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-llqcw" [abbaed00-5b1e-4fa9-9521-27b824350490] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1217 20:26:49.529722    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-345985/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-llqcw" [abbaed00-5b1e-4fa9-9521-27b824350490] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.00468503s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.24s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-698465 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-698465 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.77s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-698465 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-b7pbn" [5522d433-14be-49dc-9b71-250afa6e3d9e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004874946s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-698465 "pgrep -a kubelet"
I1217 20:27:06.901862    7531 config.go:182] Loaded profile config "flannel-698465": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-698465 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-fqp57" [0f2ed2de-3a2e-4266-b5b8-b48875caa9ed] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-fqp57" [0f2ed2de-3a2e-4266-b5b8-b48875caa9ed] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.005615295s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (100.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-235091 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-235091 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (1m40.317535041s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (100.32s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-698465 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-698465 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-698465 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (83.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-062361 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-062361 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3: (1m23.093389433s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (83.09s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-698465 "pgrep -a kubelet"
I1217 20:27:32.857012    7531 config.go:182] Loaded profile config "bridge-698465": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-698465 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-9xnz2" [6ad3405a-7b1f-4d87-9644-3fce42640b8f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-9xnz2" [6ad3405a-7b1f-4d87-9644-3fce42640b8f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004938401s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-698465 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-698465 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-698465 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (86.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-406603 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-406603 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3: (1m26.021029323s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (86.02s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (11.45s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-433699 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [0794f669-5ad3-4979-a2ea-0b3f16f2e516] Pending
helpers_test.go:353: "busybox" [0794f669-5ad3-4979-a2ea-0b3f16f2e516] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [0794f669-5ad3-4979-a2ea-0b3f16f2e516] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.008138317s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-433699 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.45s)
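Note: DeployApp above boils down to creating the busybox pod from testdata/busybox.yaml, waiting for it to become Ready, and reading `ulimit -n` inside it. A sketch of the same sequence using `kubectl wait` in place of the suite's own pod polling (illustrative only):
	// deploy_busybox.go — illustrative; mirrors the DeployApp step above.
	package main

	import (
		"log"
		"os/exec"
	)

	func kubectl(kubeContext string, args ...string) []byte {
		out, err := exec.Command("kubectl", append([]string{"--context", kubeContext}, args...)...).CombinedOutput()
		if err != nil {
			log.Fatalf("kubectl %v: %v\n%s", args, err, out)
		}
		return out
	}

	func main() {
		ctx := "old-k8s-version-433699"
		kubectl(ctx, "create", "-f", "testdata/busybox.yaml")
		kubectl(ctx, "wait", "--for=condition=Ready", "pod/busybox", "--timeout=8m0s")
		out := kubectl(ctx, "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n")
		log.Printf("busybox ulimit -n: %s", out)
	}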

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-433699 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-433699 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.317910828s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-433699 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.42s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (73.77s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-433699 --alsologtostderr -v=3
E1217 20:28:16.195802    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:28:51.302380    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/auto-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:28:51.308788    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/auto-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:28:51.320165    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/auto-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:28:51.341576    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/auto-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:28:51.382990    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/auto-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:28:51.464762    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/auto-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-433699 --alsologtostderr -v=3: (1m13.773037319s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (73.77s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (11.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-235091 create -f testdata/busybox.yaml
E1217 20:28:51.626593    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/auto-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [2dd4f478-09c8-4c9a-9a28-953650bc8828] Pending
E1217 20:28:51.948700    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/auto-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:28:52.590979    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/auto-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [2dd4f478-09c8-4c9a-9a28-953650bc8828] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1217 20:28:53.872357    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/auto-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [2dd4f478-09c8-4c9a-9a28-953650bc8828] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.005120284s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-235091 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.29s)

TestStartStop/group/embed-certs/serial/DeployApp (12.29s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-062361 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [5e6460f0-e3f7-4418-8e3b-4f27b2425812] Pending
E1217 20:28:56.434288    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/auto-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [5e6460f0-e3f7-4418-8e3b-4f27b2425812] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1217 20:29:01.556115    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/auto-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [5e6460f0-e3f7-4418-8e3b-4f27b2425812] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 12.004152999s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-062361 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (12.29s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-235091 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-235091 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.00s)

TestStartStop/group/no-preload/serial/Stop (88.21s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-235091 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-235091 --alsologtostderr -v=3: (1m28.212750802s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (88.21s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.97s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-062361 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-062361 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.97s)

TestStartStop/group/embed-certs/serial/Stop (80.49s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-062361 --alsologtostderr -v=3
E1217 20:29:11.797447    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/auto-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-062361 --alsologtostderr -v=3: (1m20.485737126s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (80.49s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-406603 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [7bdc8b49-08a6-4cfa-862e-fd6fe63f9fce] Pending
helpers_test.go:353: "busybox" [7bdc8b49-08a6-4cfa-862e-fd6fe63f9fce] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [7bdc8b49-08a6-4cfa-862e-fd6fe63f9fce] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.005238101s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-406603 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.29s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-433699 -n old-k8s-version-433699
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-433699 -n old-k8s-version-433699: exit status 7 (58.002031ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-433699 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.14s)

TestStartStop/group/old-k8s-version/serial/SecondStart (47.71s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-433699 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
E1217 20:29:32.279703    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/auto-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-433699 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (47.396646598s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-433699 -n old-k8s-version-433699
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (47.71s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-406603 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-406603 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.01s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (82.88s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-406603 --alsologtostderr -v=3
E1217 20:29:41.792459    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:29:41.798887    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:29:41.810295    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:29:41.831767    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:29:41.873214    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:29:41.954785    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:29:42.116342    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:29:42.438211    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:29:43.080141    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:29:44.362134    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:29:46.924012    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:29:52.046028    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:29:59.557510    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-841762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:30:02.288270    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:30:13.241003    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/auto-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-406603 --alsologtostderr -v=3: (1m22.875590726s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (82.88s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (10.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-p7nmt" [1418905e-4f1d-4963-a78b-e4c8e88292e4] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1217 20:30:16.485420    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-841762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-p7nmt" [1418905e-4f1d-4963-a78b-e4c8e88292e4] Running
E1217 20:30:22.769872    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.003978313s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (10.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-p7nmt" [1418905e-4f1d-4963-a78b-e4c8e88292e4] Running
E1217 20:30:28.338060    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/calico-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:30:28.344513    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/calico-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:30:28.355954    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/calico-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:30:28.377369    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/calico-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:30:28.418839    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/calico-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:30:28.500759    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/calico-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:30:28.662358    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/calico-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:30:28.983657    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/calico-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004803383s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-433699 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.14s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-062361 -n embed-certs-062361
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-062361 -n embed-certs-062361: exit status 7 (60.873722ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-062361 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.14s)

TestStartStop/group/embed-certs/serial/SecondStart (44.7s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-062361 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3
E1217 20:30:29.625412    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/calico-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:30:30.906830    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/calico-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-062361 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3: (44.369072283s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-062361 -n embed-certs-062361
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (44.70s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-433699 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/old-k8s-version/serial/Pause (2.81s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-433699 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-433699 -n old-k8s-version-433699
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-433699 -n old-k8s-version-433699: exit status 2 (229.894144ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-433699 -n old-k8s-version-433699
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-433699 -n old-k8s-version-433699: exit status 2 (218.992824ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-433699 --alsologtostderr -v=1
E1217 20:30:33.468655    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/calico-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-433699 -n old-k8s-version-433699
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-433699 -n old-k8s-version-433699
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.81s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.15s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-235091 -n no-preload-235091
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-235091 -n no-preload-235091: exit status 7 (62.006363ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-235091 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.15s)

TestStartStop/group/no-preload/serial/SecondStart (74.71s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-235091 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-235091 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (1m14.328817676s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-235091 -n no-preload-235091
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (74.71s)

TestStartStop/group/newest-cni/serial/FirstStart (74.32s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-452892 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
E1217 20:30:38.590491    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/calico-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:30:48.832099    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/calico-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:30:51.914670    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/custom-flannel-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:30:51.921101    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/custom-flannel-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:30:51.932605    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/custom-flannel-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:30:51.954050    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/custom-flannel-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:30:51.995500    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/custom-flannel-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:30:52.077045    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/custom-flannel-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:30:52.238420    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/custom-flannel-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:30:52.560732    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/custom-flannel-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:30:53.202873    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/custom-flannel-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:30:54.484912    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/custom-flannel-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:30:57.047174    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/custom-flannel-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-452892 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (1m14.319217692s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (74.32s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-406603 -n default-k8s-diff-port-406603
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-406603 -n default-k8s-diff-port-406603: exit status 7 (71.997213ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-406603 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (72.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-406603 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3
E1217 20:31:02.169361    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/custom-flannel-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:31:03.731942    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:31:09.314135    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/calico-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:31:12.411466    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/custom-flannel-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-406603 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3: (1m12.047444683s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-406603 -n default-k8s-diff-port-406603
E1217 20:32:13.854853    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/custom-flannel-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (72.35s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-pd45m" [a4b52e78-d342-4ed7-81ef-dc6128dd6536] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004963094s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-pd45m" [a4b52e78-d342-4ed7-81ef-dc6128dd6536] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005615049s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-062361 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-062361 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/Pause (3.51s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-062361 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-062361 --alsologtostderr -v=1: (1.06001453s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-062361 -n embed-certs-062361
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-062361 -n embed-certs-062361: exit status 2 (289.601291ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-062361 -n embed-certs-062361
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-062361 -n embed-certs-062361: exit status 2 (283.375335ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-062361 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p embed-certs-062361 --alsologtostderr -v=1: (1.064957736s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-062361 -n embed-certs-062361
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-062361 -n embed-certs-062361
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.51s)

TestISOImage/PersistentMounts//data (0.22s)

=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data

=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-867309 ssh "df -t ext4 /data | grep /data"
--- PASS: TestISOImage/PersistentMounts//data (0.22s)

TestISOImage/PersistentMounts//var/lib/docker (0.2s)

=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker

=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-867309 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/docker (0.20s)

TestISOImage/PersistentMounts//var/lib/cni (0.18s)

=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni

=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-867309 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
--- PASS: TestISOImage/PersistentMounts//var/lib/cni (0.18s)

TestISOImage/PersistentMounts//var/lib/kubelet (0.18s)

=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet

=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-867309 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
--- PASS: TestISOImage/PersistentMounts//var/lib/kubelet (0.18s)

TestISOImage/PersistentMounts//var/lib/minikube (0.21s)

=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube

=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-867309 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
--- PASS: TestISOImage/PersistentMounts//var/lib/minikube (0.21s)

TestISOImage/PersistentMounts//var/lib/toolbox (0.2s)

=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox

=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-867309 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
--- PASS: TestISOImage/PersistentMounts//var/lib/toolbox (0.20s)

TestISOImage/PersistentMounts//var/lib/boot2docker (0.22s)

=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker

=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-867309 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/boot2docker (0.22s)

TestISOImage/VersionJSON (0.38s)

=== RUN   TestISOImage/VersionJSON
iso_test.go:106: (dbg) Run:  out/minikube-linux-amd64 -p guest-867309 ssh "cat /version.json"
iso_test.go:116: Successfully parsed /version.json:
iso_test.go:118:   iso_version: v1.37.0-1765965980-22186
iso_test.go:118:   kicbase_version: v0.0.48-1765661130-22141
iso_test.go:118:   minikube_version: v1.37.0
iso_test.go:118:   commit: c344550999bcbb78f38b2df057224788bb2d30b2
--- PASS: TestISOImage/VersionJSON (0.38s)

TestISOImage/eBPFSupport (0.2s)

=== RUN   TestISOImage/eBPFSupport
iso_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p guest-867309 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
--- PASS: TestISOImage/eBPFSupport (0.20s)
E1217 20:31:35.162572    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/auto-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:31:44.706773    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/enable-default-cni-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:31:44.713301    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/enable-default-cni-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:31:44.724994    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/enable-default-cni-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:31:44.746486    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/enable-default-cni-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:31:44.788664    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/enable-default-cni-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:31:44.870170    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/enable-default-cni-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:31:45.031944    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/enable-default-cni-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:31:45.353984    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/enable-default-cni-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:31:45.996188    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/enable-default-cni-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-tklrz" [1fb32c4f-88c3-4b0b-89d3-0d9fd89cf508] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1217 20:31:47.277994    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/enable-default-cni-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:31:49.528723    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/functional-345985/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:31:49.839699    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/enable-default-cni-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-tklrz" [1fb32c4f-88c3-4b0b-89d3-0d9fd89cf508] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.014309772s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.02s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-452892 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1217 20:31:50.275776    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/calico-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-452892 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (3.056294025s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.06s)

TestStartStop/group/newest-cni/serial/Stop (82.49s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-452892 --alsologtostderr -v=3
E1217 20:31:54.961635    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/enable-default-cni-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-452892 --alsologtostderr -v=3: (1m22.491285237s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (82.49s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-tklrz" [1fb32c4f-88c3-4b0b-89d3-0d9fd89cf508] Running
E1217 20:32:00.693345    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/flannel-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:32:00.700073    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/flannel-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:32:00.711582    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/flannel-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:32:00.733025    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/flannel-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:32:00.774537    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/flannel-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:32:00.856030    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/flannel-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:32:01.018185    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/flannel-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:32:01.340153    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/flannel-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.008890657s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-235091 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
E1217 20:32:01.981726    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/flannel-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.10s)
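
Note: the readiness wait above is driven by the test harness, but it can be approximated by hand with kubectl against the same context (a sketch; label, namespace and context come from this run, the timeout value is illustrative):

	kubectl --context no-preload-235091 -n kubernetes-dashboard \
	  wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=120s
	# the test then inspects the metrics-scraper deployment
	kubectl --context no-preload-235091 -n kubernetes-dashboard \
	  describe deploy/dashboard-metrics-scraper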

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-235091 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)
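
Note: the image verification above only lists what the cluster runtime knows about; the same listing can be reproduced directly (a sketch; the profile name comes from this run, and the JSON form is what the test parses):

	# JSON form, as consumed by the test
	out/minikube-linux-amd64 -p no-preload-235091 image list --format=json
	# plain form, easier to eyeball for non-minikube images such as the busybox entry above
	out/minikube-linux-amd64 -p no-preload-235091 image list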

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.65s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-235091 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-235091 -n no-preload-235091
E1217 20:32:03.263913    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/flannel-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-235091 -n no-preload-235091: exit status 2 (234.134898ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-235091 -n no-preload-235091
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-235091 -n no-preload-235091: exit status 2 (233.331821ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-235091 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-235091 -n no-preload-235091
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-235091 -n no-preload-235091
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.65s)
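
Note: the pause check above reduces to the command sequence below, which can be replayed against the same profile (a sketch built from the commands in this log; exit status 2 from the status calls is expected while components are paused):

	out/minikube-linux-amd64 pause -p no-preload-235091 --alsologtostderr -v=1
	# while paused, the apiserver should report "Paused" and the kubelet "Stopped"
	out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-235091 -n no-preload-235091
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-235091 -n no-preload-235091
	out/minikube-linux-amd64 unpause -p no-preload-235091 --alsologtostderr -v=1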

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (7.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-j4z2r" [919aca2f-7725-432b-8220-946eff7a66cd] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-j4z2r" [919aca2f-7725-432b-8220-946eff7a66cd] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 7.004184366s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (7.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-j4z2r" [919aca2f-7725-432b-8220-946eff7a66cd] Running
E1217 20:32:21.189914    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/flannel-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:32:25.654138    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/kindnet-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:32:25.685631    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/enable-default-cni-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004736841s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-406603 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-406603 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.46s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-406603 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-406603 -n default-k8s-diff-port-406603
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-406603 -n default-k8s-diff-port-406603: exit status 2 (204.795ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-406603 -n default-k8s-diff-port-406603
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-406603 -n default-k8s-diff-port-406603: exit status 2 (219.611187ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-406603 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-406603 -n default-k8s-diff-port-406603
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-406603 -n default-k8s-diff-port-406603
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.46s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-452892 -n newest-cni-452892
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-452892 -n newest-cni-452892: exit status 7 (58.997435ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-452892 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.14s)
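
Note: addons can be enabled while the profile is stopped, and --images overrides the image an addon deploys. A sketch of the two steps above (profile and image taken from this run; exit status 7 from status simply reports the stopped host, as the log notes):

	out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-452892 -n newest-cni-452892
	# enable the dashboard addon with an overridden MetricsScraper image
	out/minikube-linux-amd64 addons enable dashboard -p newest-cni-452892 \
	  --images=MetricsScraper=registry.k8s.io/echoserver:1.4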

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (30.11s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-452892 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
E1217 20:33:16.195419    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:33:22.611560    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/old-k8s-version-433699/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:33:22.634003    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/flannel-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:33:35.776386    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/custom-flannel-698465/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:33:43.092835    7531 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/old-k8s-version-433699/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-452892 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (29.851973416s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-452892 -n newest-cni-452892
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (30.11s)
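
Note: for readability, the restart invocation above is the following single command, line-wrapped (same flags as in the log):

	out/minikube-linux-amd64 start -p newest-cni-452892 \
	  --memory=3072 --alsologtostderr \
	  --wait=apiserver,system_pods,default_sa \
	  --network-plugin=cni \
	  --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	  --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.35.0-rc.1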

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-452892 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-452892 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-452892 --alsologtostderr -v=1: (1.261330456s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-452892 -n newest-cni-452892
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-452892 -n newest-cni-452892: exit status 2 (293.50557ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-452892 -n newest-cni-452892
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-452892 -n newest-cni-452892: exit status 2 (230.620419ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-452892 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-452892 -n newest-cni-452892
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-452892 -n newest-cni-452892
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.26s)

                                                
                                    

Test skip (52/424)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.3/cached-images 0
15 TestDownloadOnly/v1.34.3/binaries 0
16 TestDownloadOnly/v1.34.3/kubectl 0
23 TestDownloadOnly/v1.35.0-rc.1/cached-images 0
24 TestDownloadOnly/v1.35.0-rc.1/binaries 0
25 TestDownloadOnly/v1.35.0-rc.1/kubectl 0
29 TestDownloadOnlyKic 0
38 TestAddons/serial/Volcano 0.32
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
154 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
155 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
156 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
157 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
158 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
159 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
160 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
161 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
209 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv 0
210 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv 0
230 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
231 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel 0.01
232 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService 0.01
233 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect 0.01
234 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
235 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
236 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
237 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel 0.01
258 TestGvisorAddon 0
280 TestImageBuild 0
308 TestKicCustomNetwork 0
309 TestKicExistingNetwork 0
310 TestKicCustomSubnet 0
311 TestKicStaticIP 0
336 TestChangeNoneUser 0
339 TestScheduledStopWindows 0
341 TestSkaffold 0
343 TestInsufficientStorage 0
347 TestMissingContainerUpgrade 0
353 TestNetworkPlugins/group/kubenet 3.35
362 TestNetworkPlugins/group/cilium 3.59
390 TestStartStop/group/disable-driver-mounts 0.22
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.3/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.3/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.32s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-886556 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.32s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
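
Note: this and the following TunnelCmd sub-tests skip because the harness cannot run 'route' under sudo without a password prompt. One way to let them run is a passwordless sudo rule for the CI user; the rule below is a hypothetical sketch (user name and binary paths are assumptions, not taken from this run):

	# hypothetical sudoers entry; adjust user and paths for the actual host
	echo 'jenkins ALL=(ALL) NOPASSWD: /usr/sbin/route, /usr/sbin/ip' | sudo tee /etc/sudoers.d/minikube-tunnel
	sudo -n route -n   # should now succeed without prompting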

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-698465 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-698465

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-698465

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-698465

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-698465

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-698465

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-698465

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-698465

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-698465

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-698465

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-698465

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-698465"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-698465"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-698465"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-698465

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-698465"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-698465"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-698465" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-698465" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-698465" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-698465" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-698465" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-698465" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-698465" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-698465" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-698465"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-698465"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-698465"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-698465"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-698465"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-698465" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-698465" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-698465" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-698465"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-698465"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-698465"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-698465"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-698465"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-698465

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-698465"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-698465"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-698465"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-698465"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-698465"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-698465"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-698465"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-698465"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-698465"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-698465"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-698465"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-698465"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-698465"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-698465"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-698465"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-698465"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-698465"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-698465"

                                                
                                                
----------------------- debugLogs end: kubenet-698465 [took: 3.187655261s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-698465" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-698465
--- SKIP: TestNetworkPlugins/group/kubenet (3.35s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.59s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-698465 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-698465

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-698465

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-698465

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-698465

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-698465

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-698465

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-698465

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-698465

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-698465

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-698465

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-698465"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-698465"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-698465"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-698465

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-698465"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-698465"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-698465" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-698465" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-698465" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-698465" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-698465" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-698465" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-698465" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-698465" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-698465"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-698465"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-698465"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-698465"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-698465"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-698465

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-698465

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-698465" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-698465" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-698465

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-698465

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-698465" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-698465" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-698465" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-698465" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-698465" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-698465"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-698465"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-698465"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-698465"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-698465"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-698465

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-698465"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-698465"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-698465"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-698465"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-698465"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-698465"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-698465"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-698465"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-698465"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-698465"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-698465"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-698465"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-698465"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-698465"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-698465"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-698465"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-698465"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-698465" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-698465"

                                                
                                                
----------------------- debugLogs end: cilium-698465 [took: 3.419276819s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-698465" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-698465
--- SKIP: TestNetworkPlugins/group/cilium (3.59s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-074282" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-074282
--- SKIP: TestStartStop/group/disable-driver-mounts (0.22s)

                                                
                                    