Test Report: KVM_Linux_crio 22122

022dd2780ab8206ac68153a1ee37fdbcc6da7ccd:2025-12-13:42761

Test fail (8/370)

TestAddons/parallel/Ingress (157.55s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-685870 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-685870 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-685870 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [9264c705-e985-4103-9edc-eaa92549670d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [9264c705-e985-4103-9edc-eaa92549670d] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.007820133s
I1213 13:08:50.293002  135234 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-685870 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-685870 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m14.578499235s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
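Note on the failure above: exit status 28 is curl's "operation timed out" code, which the in-VM ssh session surfaces as "Process exited with status 28". A minimal Go sketch for re-running the same probe by hand, assuming the addons-685870 profile is still running and the binary from this run is still at out/minikube-linux-amd64 (the 30s/20s deadlines are illustrative, not taken from the test code):

    // probe_ingress.go: editor's sketch, not part of the minikube test suite.
    // Re-runs the check that timed out above: curl the nginx ingress from
    // inside the VM via `minikube ssh`, with an explicit deadline.
    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        defer cancel()

        cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64", "-p", "addons-685870",
            "ssh", "curl -sS -m 20 http://127.0.0.1/ -H 'Host: nginx.example.com'")
        out, err := cmd.CombinedOutput()
        fmt.Printf("output: %s\nerr: %v\n", out, err)
    }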
addons_test.go:290: (dbg) Run:  kubectl --context addons-685870 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p addons-685870 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.39.155
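The nslookup above exercises the ingress-dns addon by querying the node IP (192.168.39.155, as reported by `minikube ip`) directly for hello-john.test. The same check can be scripted with a Go resolver pointed at that IP instead of the system DNS; a sketch, with the IP and hostname taken from this run and the 5s timeout chosen arbitrarily:

    // lookup_ingress_dns.go: editor's sketch (not from the test suite),
    // equivalent to `nslookup hello-john.test 192.168.39.155`.
    package main

    import (
        "context"
        "fmt"
        "net"
        "time"
    )

    func main() {
        r := &net.Resolver{
            PreferGo: true,
            Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
                d := net.Dialer{Timeout: 5 * time.Second}
                // Send the query to the minikube node instead of the host resolver.
                return d.DialContext(ctx, network, "192.168.39.155:53")
            },
        }
        addrs, err := r.LookupHost(context.Background(), "hello-john.test")
        fmt.Println(addrs, err)
    }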
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-685870 -n addons-685870
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-685870 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-685870 logs -n 25: (1.181791496s)
helpers_test.go:261: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-059438                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-059438 │ jenkins │ v1.37.0 │ 13 Dec 25 13:06 UTC │ 13 Dec 25 13:06 UTC │
	│ start   │ --download-only -p binary-mirror-716159 --alsologtostderr --binary-mirror http://127.0.0.1:33249 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-716159 │ jenkins │ v1.37.0 │ 13 Dec 25 13:06 UTC │                     │
	│ delete  │ -p binary-mirror-716159                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-716159 │ jenkins │ v1.37.0 │ 13 Dec 25 13:06 UTC │ 13 Dec 25 13:06 UTC │
	│ addons  │ enable dashboard -p addons-685870                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-685870        │ jenkins │ v1.37.0 │ 13 Dec 25 13:06 UTC │                     │
	│ addons  │ disable dashboard -p addons-685870                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-685870        │ jenkins │ v1.37.0 │ 13 Dec 25 13:06 UTC │                     │
	│ start   │ -p addons-685870 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-685870        │ jenkins │ v1.37.0 │ 13 Dec 25 13:06 UTC │ 13 Dec 25 13:08 UTC │
	│ addons  │ addons-685870 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-685870        │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:08 UTC │
	│ addons  │ addons-685870 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-685870        │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:08 UTC │
	│ addons  │ enable headlamp -p addons-685870 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-685870        │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:08 UTC │
	│ addons  │ addons-685870 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-685870        │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:08 UTC │
	│ addons  │ addons-685870 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-685870        │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:08 UTC │
	│ ip      │ addons-685870 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-685870        │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:08 UTC │
	│ addons  │ addons-685870 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-685870        │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:08 UTC │
	│ ssh     │ addons-685870 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-685870        │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-685870                                                                                                                                                                                                                                                                                                                                                                                         │ addons-685870        │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:08 UTC │
	│ addons  │ addons-685870 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-685870        │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:08 UTC │
	│ addons  │ addons-685870 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-685870        │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:09 UTC │
	│ ssh     │ addons-685870 ssh cat /opt/local-path-provisioner/pvc-ebf86252-4882-4e05-b2c9-1d3fc597ad06_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                      │ addons-685870        │ jenkins │ v1.37.0 │ 13 Dec 25 13:09 UTC │ 13 Dec 25 13:09 UTC │
	│ addons  │ addons-685870 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-685870        │ jenkins │ v1.37.0 │ 13 Dec 25 13:09 UTC │ 13 Dec 25 13:09 UTC │
	│ addons  │ addons-685870 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-685870        │ jenkins │ v1.37.0 │ 13 Dec 25 13:09 UTC │ 13 Dec 25 13:09 UTC │
	│ addons  │ addons-685870 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-685870        │ jenkins │ v1.37.0 │ 13 Dec 25 13:09 UTC │ 13 Dec 25 13:09 UTC │
	│ addons  │ addons-685870 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-685870        │ jenkins │ v1.37.0 │ 13 Dec 25 13:09 UTC │ 13 Dec 25 13:09 UTC │
	│ addons  │ addons-685870 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-685870        │ jenkins │ v1.37.0 │ 13 Dec 25 13:09 UTC │ 13 Dec 25 13:09 UTC │
	│ addons  │ addons-685870 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-685870        │ jenkins │ v1.37.0 │ 13 Dec 25 13:09 UTC │ 13 Dec 25 13:09 UTC │
	│ ip      │ addons-685870 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-685870        │ jenkins │ v1.37.0 │ 13 Dec 25 13:11 UTC │ 13 Dec 25 13:11 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 13:06:04
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 13:06:04.923724  136192 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:06:04.923976  136192 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:06:04.923985  136192 out.go:374] Setting ErrFile to fd 2...
	I1213 13:06:04.923990  136192 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:06:04.924244  136192 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-131207/.minikube/bin
	I1213 13:06:04.924830  136192 out.go:368] Setting JSON to false
	I1213 13:06:04.925714  136192 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2905,"bootTime":1765628260,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:06:04.925769  136192 start.go:143] virtualization: kvm guest
	I1213 13:06:04.927463  136192 out.go:179] * [addons-685870] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 13:06:04.928660  136192 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 13:06:04.928666  136192 notify.go:221] Checking for updates...
	I1213 13:06:04.930717  136192 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:06:04.931857  136192 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-131207/kubeconfig
	I1213 13:06:04.932918  136192 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-131207/.minikube
	I1213 13:06:04.934040  136192 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 13:06:04.935173  136192 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 13:06:04.936517  136192 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 13:06:04.965680  136192 out.go:179] * Using the kvm2 driver based on user configuration
	I1213 13:06:04.966551  136192 start.go:309] selected driver: kvm2
	I1213 13:06:04.966566  136192 start.go:927] validating driver "kvm2" against <nil>
	I1213 13:06:04.966581  136192 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 13:06:04.967295  136192 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 13:06:04.967530  136192 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 13:06:04.967557  136192 cni.go:84] Creating CNI manager for ""
	I1213 13:06:04.967612  136192 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 13:06:04.967632  136192 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1213 13:06:04.967710  136192 start.go:353] cluster config:
	{Name:addons-685870 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-685870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: A
utoPauseInterval:1m0s}
	I1213 13:06:04.967833  136192 iso.go:125] acquiring lock: {Name:mk3b22d147b17c1b05cdcd03e16c3f962e91cdaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 13:06:04.969055  136192 out.go:179] * Starting "addons-685870" primary control-plane node in "addons-685870" cluster
	I1213 13:06:04.969972  136192 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 13:06:04.969998  136192 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-131207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1213 13:06:04.970005  136192 cache.go:65] Caching tarball of preloaded images
	I1213 13:06:04.970097  136192 preload.go:238] Found /home/jenkins/minikube-integration/22122-131207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 13:06:04.970112  136192 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 13:06:04.970406  136192 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/config.json ...
	I1213 13:06:04.970429  136192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/config.json: {Name:mk87d25a7add1b61736edadb979d71fef18f2d73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:06:04.970555  136192 start.go:360] acquireMachinesLock for addons-685870: {Name:mkd3517afd6ad3d581ae9f96a02a4688cf83ce0e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 13:06:04.971216  136192 start.go:364] duration metric: took 646.238µs to acquireMachinesLock for "addons-685870"
	I1213 13:06:04.971240  136192 start.go:93] Provisioning new machine with config: &{Name:addons-685870 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:addons-685870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 13:06:04.971292  136192 start.go:125] createHost starting for "" (driver="kvm2")
	I1213 13:06:04.973013  136192 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1213 13:06:04.973234  136192 start.go:159] libmachine.API.Create for "addons-685870" (driver="kvm2")
	I1213 13:06:04.973264  136192 client.go:173] LocalClient.Create starting
	I1213 13:06:04.973336  136192 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem
	I1213 13:06:04.995700  136192 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/cert.pem
	I1213 13:06:05.068319  136192 main.go:143] libmachine: creating domain...
	I1213 13:06:05.068341  136192 main.go:143] libmachine: creating network...
	I1213 13:06:05.069873  136192 main.go:143] libmachine: found existing default network
	I1213 13:06:05.070132  136192 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1213 13:06:05.070716  136192 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c109a0}
	I1213 13:06:05.070810  136192 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-685870</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1213 13:06:05.077065  136192 main.go:143] libmachine: creating private network mk-addons-685870 192.168.39.0/24...
	I1213 13:06:05.142482  136192 main.go:143] libmachine: private network mk-addons-685870 192.168.39.0/24 created
	I1213 13:06:05.142796  136192 main.go:143] libmachine: <network>
	  <name>mk-addons-685870</name>
	  <uuid>bfbff2e1-dc1e-4727-b5f5-e11552e7878b</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:02:36:39'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1213 13:06:05.142833  136192 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870 ...
	I1213 13:06:05.142853  136192 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22122-131207/.minikube/cache/iso/amd64/minikube-v1.37.0-1765613186-22122-amd64.iso
	I1213 13:06:05.142864  136192 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22122-131207/.minikube
	I1213 13:06:05.142935  136192 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22122-131207/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22122-131207/.minikube/cache/iso/amd64/minikube-v1.37.0-1765613186-22122-amd64.iso...
	I1213 13:06:05.440530  136192 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/id_rsa...
	I1213 13:06:05.628432  136192 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/addons-685870.rawdisk...
	I1213 13:06:05.628476  136192 main.go:143] libmachine: Writing magic tar header
	I1213 13:06:05.628517  136192 main.go:143] libmachine: Writing SSH key tar header
	I1213 13:06:05.628606  136192 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870 ...
	I1213 13:06:05.628661  136192 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870
	I1213 13:06:05.628703  136192 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870 (perms=drwx------)
	I1213 13:06:05.628720  136192 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22122-131207/.minikube/machines
	I1213 13:06:05.628732  136192 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22122-131207/.minikube/machines (perms=drwxr-xr-x)
	I1213 13:06:05.628743  136192 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22122-131207/.minikube
	I1213 13:06:05.628753  136192 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22122-131207/.minikube (perms=drwxr-xr-x)
	I1213 13:06:05.628764  136192 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22122-131207
	I1213 13:06:05.628773  136192 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22122-131207 (perms=drwxrwxr-x)
	I1213 13:06:05.628782  136192 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1213 13:06:05.628791  136192 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1213 13:06:05.628799  136192 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1213 13:06:05.628809  136192 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1213 13:06:05.628818  136192 main.go:143] libmachine: checking permissions on dir: /home
	I1213 13:06:05.628827  136192 main.go:143] libmachine: skipping /home - not owner
	I1213 13:06:05.628832  136192 main.go:143] libmachine: defining domain...
	I1213 13:06:05.630125  136192 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-685870</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/addons-685870.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-685870'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1213 13:06:05.637172  136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:0c:44:09 in network default
	I1213 13:06:05.637813  136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:05.637832  136192 main.go:143] libmachine: starting domain...
	I1213 13:06:05.637837  136192 main.go:143] libmachine: ensuring networks are active...
	I1213 13:06:05.638554  136192 main.go:143] libmachine: Ensuring network default is active
	I1213 13:06:05.638925  136192 main.go:143] libmachine: Ensuring network mk-addons-685870 is active
	I1213 13:06:05.639535  136192 main.go:143] libmachine: getting domain XML...
	I1213 13:06:05.640521  136192 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-685870</name>
	  <uuid>23167541-60b9-4d48-b988-554cdedf00bd</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/addons-685870.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:4c:b9:14'/>
	      <source network='mk-addons-685870'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:0c:44:09'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1213 13:06:06.924164  136192 main.go:143] libmachine: waiting for domain to start...
	I1213 13:06:06.925652  136192 main.go:143] libmachine: domain is now running
	I1213 13:06:06.925676  136192 main.go:143] libmachine: waiting for IP...
	I1213 13:06:06.926504  136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:06.927134  136192 main.go:143] libmachine: no network interface addresses found for domain addons-685870 (source=lease)
	I1213 13:06:06.927157  136192 main.go:143] libmachine: trying to list again with source=arp
	I1213 13:06:06.927496  136192 main.go:143] libmachine: unable to find current IP address of domain addons-685870 in network mk-addons-685870 (interfaces detected: [])
	I1213 13:06:06.927563  136192 retry.go:31] will retry after 261.089812ms: waiting for domain to come up
	I1213 13:06:07.190003  136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:07.190569  136192 main.go:143] libmachine: no network interface addresses found for domain addons-685870 (source=lease)
	I1213 13:06:07.190587  136192 main.go:143] libmachine: trying to list again with source=arp
	I1213 13:06:07.190907  136192 main.go:143] libmachine: unable to find current IP address of domain addons-685870 in network mk-addons-685870 (interfaces detected: [])
	I1213 13:06:07.190943  136192 retry.go:31] will retry after 306.223214ms: waiting for domain to come up
	I1213 13:06:07.498340  136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:07.498783  136192 main.go:143] libmachine: no network interface addresses found for domain addons-685870 (source=lease)
	I1213 13:06:07.498797  136192 main.go:143] libmachine: trying to list again with source=arp
	I1213 13:06:07.499083  136192 main.go:143] libmachine: unable to find current IP address of domain addons-685870 in network mk-addons-685870 (interfaces detected: [])
	I1213 13:06:07.499118  136192 retry.go:31] will retry after 402.041961ms: waiting for domain to come up
	I1213 13:06:07.902729  136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:07.903309  136192 main.go:143] libmachine: no network interface addresses found for domain addons-685870 (source=lease)
	I1213 13:06:07.903327  136192 main.go:143] libmachine: trying to list again with source=arp
	I1213 13:06:07.903647  136192 main.go:143] libmachine: unable to find current IP address of domain addons-685870 in network mk-addons-685870 (interfaces detected: [])
	I1213 13:06:07.903688  136192 retry.go:31] will retry after 372.890146ms: waiting for domain to come up
	I1213 13:06:08.278127  136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:08.278560  136192 main.go:143] libmachine: no network interface addresses found for domain addons-685870 (source=lease)
	I1213 13:06:08.278574  136192 main.go:143] libmachine: trying to list again with source=arp
	I1213 13:06:08.278821  136192 main.go:143] libmachine: unable to find current IP address of domain addons-685870 in network mk-addons-685870 (interfaces detected: [])
	I1213 13:06:08.278858  136192 retry.go:31] will retry after 744.363927ms: waiting for domain to come up
	I1213 13:06:09.025006  136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:09.025602  136192 main.go:143] libmachine: no network interface addresses found for domain addons-685870 (source=lease)
	I1213 13:06:09.025625  136192 main.go:143] libmachine: trying to list again with source=arp
	I1213 13:06:09.025922  136192 main.go:143] libmachine: unable to find current IP address of domain addons-685870 in network mk-addons-685870 (interfaces detected: [])
	I1213 13:06:09.025973  136192 retry.go:31] will retry after 604.505944ms: waiting for domain to come up
	I1213 13:06:09.631619  136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:09.632204  136192 main.go:143] libmachine: no network interface addresses found for domain addons-685870 (source=lease)
	I1213 13:06:09.632231  136192 main.go:143] libmachine: trying to list again with source=arp
	I1213 13:06:09.632586  136192 main.go:143] libmachine: unable to find current IP address of domain addons-685870 in network mk-addons-685870 (interfaces detected: [])
	I1213 13:06:09.632629  136192 retry.go:31] will retry after 862.011279ms: waiting for domain to come up
	I1213 13:06:10.495743  136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:10.496162  136192 main.go:143] libmachine: no network interface addresses found for domain addons-685870 (source=lease)
	I1213 13:06:10.496174  136192 main.go:143] libmachine: trying to list again with source=arp
	I1213 13:06:10.496404  136192 main.go:143] libmachine: unable to find current IP address of domain addons-685870 in network mk-addons-685870 (interfaces detected: [])
	I1213 13:06:10.496441  136192 retry.go:31] will retry after 1.364913195s: waiting for domain to come up
	I1213 13:06:11.862877  136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:11.863382  136192 main.go:143] libmachine: no network interface addresses found for domain addons-685870 (source=lease)
	I1213 13:06:11.863396  136192 main.go:143] libmachine: trying to list again with source=arp
	I1213 13:06:11.863643  136192 main.go:143] libmachine: unable to find current IP address of domain addons-685870 in network mk-addons-685870 (interfaces detected: [])
	I1213 13:06:11.863680  136192 retry.go:31] will retry after 1.467338749s: waiting for domain to come up
	I1213 13:06:13.333393  136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:13.333887  136192 main.go:143] libmachine: no network interface addresses found for domain addons-685870 (source=lease)
	I1213 13:06:13.333901  136192 main.go:143] libmachine: trying to list again with source=arp
	I1213 13:06:13.334194  136192 main.go:143] libmachine: unable to find current IP address of domain addons-685870 in network mk-addons-685870 (interfaces detected: [])
	I1213 13:06:13.334238  136192 retry.go:31] will retry after 1.655012284s: waiting for domain to come up
	I1213 13:06:14.990676  136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:14.991390  136192 main.go:143] libmachine: no network interface addresses found for domain addons-685870 (source=lease)
	I1213 13:06:14.991419  136192 main.go:143] libmachine: trying to list again with source=arp
	I1213 13:06:14.991827  136192 main.go:143] libmachine: unable to find current IP address of domain addons-685870 in network mk-addons-685870 (interfaces detected: [])
	I1213 13:06:14.991882  136192 retry.go:31] will retry after 2.53356744s: waiting for domain to come up
	I1213 13:06:17.528950  136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:17.529591  136192 main.go:143] libmachine: no network interface addresses found for domain addons-685870 (source=lease)
	I1213 13:06:17.529609  136192 main.go:143] libmachine: trying to list again with source=arp
	I1213 13:06:17.529974  136192 main.go:143] libmachine: unable to find current IP address of domain addons-685870 in network mk-addons-685870 (interfaces detected: [])
	I1213 13:06:17.530029  136192 retry.go:31] will retry after 3.082423333s: waiting for domain to come up
	I1213 13:06:20.613931  136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:20.614573  136192 main.go:143] libmachine: domain addons-685870 has current primary IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:20.614592  136192 main.go:143] libmachine: found domain IP: 192.168.39.155
	I1213 13:06:20.614601  136192 main.go:143] libmachine: reserving static IP address...
	I1213 13:06:20.615143  136192 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-685870", mac: "52:54:00:4c:b9:14", ip: "192.168.39.155"} in network mk-addons-685870
	I1213 13:06:20.808288  136192 main.go:143] libmachine: reserved static IP address 192.168.39.155 for domain addons-685870
	I1213 13:06:20.808311  136192 main.go:143] libmachine: waiting for SSH...
	I1213 13:06:20.808340  136192 main.go:143] libmachine: Getting to WaitForSSH function...
	I1213 13:06:20.811159  136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:20.811688  136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:minikube Clientid:01:52:54:00:4c:b9:14}
	I1213 13:06:20.811716  136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:20.811928  136192 main.go:143] libmachine: Using SSH client type: native
	I1213 13:06:20.812208  136192 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I1213 13:06:20.812221  136192 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1213 13:06:20.918867  136192 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 13:06:20.919302  136192 main.go:143] libmachine: domain creation complete
	I1213 13:06:20.920860  136192 machine.go:94] provisionDockerMachine start ...
	I1213 13:06:20.923046  136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:20.923448  136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
	I1213 13:06:20.923482  136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:20.923651  136192 main.go:143] libmachine: Using SSH client type: native
	I1213 13:06:20.923856  136192 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I1213 13:06:20.923871  136192 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 13:06:21.030842  136192 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1213 13:06:21.030887  136192 buildroot.go:166] provisioning hostname "addons-685870"
	I1213 13:06:21.033915  136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:21.034363  136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
	I1213 13:06:21.034398  136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:21.034591  136192 main.go:143] libmachine: Using SSH client type: native
	I1213 13:06:21.034791  136192 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I1213 13:06:21.034803  136192 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-685870 && echo "addons-685870" | sudo tee /etc/hostname
	I1213 13:06:21.170243  136192 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-685870
	
	I1213 13:06:21.172969  136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:21.173334  136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
	I1213 13:06:21.173356  136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:21.173506  136192 main.go:143] libmachine: Using SSH client type: native
	I1213 13:06:21.173714  136192 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I1213 13:06:21.173730  136192 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-685870' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-685870/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-685870' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 13:06:21.291369  136192 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 13:06:21.291441  136192 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22122-131207/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-131207/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-131207/.minikube}
	I1213 13:06:21.291473  136192 buildroot.go:174] setting up certificates
	I1213 13:06:21.291486  136192 provision.go:84] configureAuth start
	I1213 13:06:21.294597  136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:21.295021  136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
	I1213 13:06:21.295065  136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:21.297598  136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:21.298048  136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
	I1213 13:06:21.298101  136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:21.298242  136192 provision.go:143] copyHostCerts
	I1213 13:06:21.298336  136192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-131207/.minikube/ca.pem (1078 bytes)
	I1213 13:06:21.298476  136192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-131207/.minikube/cert.pem (1123 bytes)
	I1213 13:06:21.298542  136192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-131207/.minikube/key.pem (1675 bytes)
	I1213 13:06:21.299514  136192 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-131207/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca-key.pem org=jenkins.addons-685870 san=[127.0.0.1 192.168.39.155 addons-685870 localhost minikube]
	I1213 13:06:21.426641  136192 provision.go:177] copyRemoteCerts
	I1213 13:06:21.426715  136192 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 13:06:21.429502  136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:21.429937  136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
	I1213 13:06:21.429967  136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:21.430133  136192 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/id_rsa Username:docker}
	I1213 13:06:21.514447  136192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 13:06:21.545060  136192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1213 13:06:21.575511  136192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 13:06:21.605736  136192 provision.go:87] duration metric: took 314.218832ms to configureAuth
	I1213 13:06:21.605776  136192 buildroot.go:189] setting minikube options for container-runtime
	I1213 13:06:21.606017  136192 config.go:182] Loaded profile config "addons-685870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:06:21.608744  136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:21.609155  136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
	I1213 13:06:21.609182  136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:21.609384  136192 main.go:143] libmachine: Using SSH client type: native
	I1213 13:06:21.609619  136192 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I1213 13:06:21.609635  136192 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 13:06:21.840241  136192 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 13:06:21.840269  136192 machine.go:97] duration metric: took 919.388709ms to provisionDockerMachine
	I1213 13:06:21.840281  136192 client.go:176] duration metric: took 16.867011394s to LocalClient.Create
	I1213 13:06:21.840299  136192 start.go:167] duration metric: took 16.867065987s to libmachine.API.Create "addons-685870"
	I1213 13:06:21.840306  136192 start.go:293] postStartSetup for "addons-685870" (driver="kvm2")
	I1213 13:06:21.840316  136192 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 13:06:21.840378  136192 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 13:06:21.843187  136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:21.843612  136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
	I1213 13:06:21.843641  136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:21.843778  136192 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/id_rsa Username:docker}
	I1213 13:06:21.927997  136192 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 13:06:21.932971  136192 info.go:137] Remote host: Buildroot 2025.02
	I1213 13:06:21.933010  136192 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-131207/.minikube/addons for local assets ...
	I1213 13:06:21.933103  136192 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-131207/.minikube/files for local assets ...
	I1213 13:06:21.933139  136192 start.go:296] duration metric: took 92.819073ms for postStartSetup
	I1213 13:06:21.936391  136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:21.936899  136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
	I1213 13:06:21.936940  136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:21.937321  136192 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/config.json ...
	I1213 13:06:21.937541  136192 start.go:128] duration metric: took 16.966236657s to createHost
	I1213 13:06:21.940010  136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:21.940423  136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
	I1213 13:06:21.940447  136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:21.940650  136192 main.go:143] libmachine: Using SSH client type: native
	I1213 13:06:21.940889  136192 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I1213 13:06:21.940901  136192 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1213 13:06:22.051446  136192 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765631182.011013931
	
	I1213 13:06:22.051478  136192 fix.go:216] guest clock: 1765631182.011013931
	I1213 13:06:22.051489  136192 fix.go:229] Guest: 2025-12-13 13:06:22.011013931 +0000 UTC Remote: 2025-12-13 13:06:21.937556264 +0000 UTC m=+17.062827264 (delta=73.457667ms)
	I1213 13:06:22.051516  136192 fix.go:200] guest clock delta is within tolerance: 73.457667ms
	I1213 13:06:22.051521  136192 start.go:83] releasing machines lock for "addons-685870", held for 17.080292802s
	I1213 13:06:22.054463  136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:22.054877  136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
	I1213 13:06:22.054902  136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:22.055470  136192 ssh_runner.go:195] Run: cat /version.json
	I1213 13:06:22.055574  136192 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 13:06:22.058820  136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:22.058954  136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:22.059370  136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
	I1213 13:06:22.059442  136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
	I1213 13:06:22.059473  136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:22.059499  136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:22.059679  136192 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/id_rsa Username:docker}
	I1213 13:06:22.059908  136192 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/id_rsa Username:docker}
	I1213 13:06:22.162944  136192 ssh_runner.go:195] Run: systemctl --version
	I1213 13:06:22.169634  136192 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 13:06:22.333654  136192 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 13:06:22.340664  136192 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 13:06:22.340748  136192 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 13:06:22.360722  136192 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
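The find/mv step above sidelines any pre-existing bridge or podman CNI configs by renaming them with a .mk_disabled suffix, which is why 87-podman-bridge.conflist is reported as disabled. A quick way to confirm this on the guest (a sketch; the listing itself is not part of this log):

	ls /etc/cni/net.d
	# expected: 87-podman-bridge.conflist.mk_disabled instead of the original filename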
	I1213 13:06:22.360762  136192 start.go:496] detecting cgroup driver to use...
	I1213 13:06:22.360854  136192 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 13:06:22.383285  136192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 13:06:22.400233  136192 docker.go:218] disabling cri-docker service (if available) ...
	I1213 13:06:22.400295  136192 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 13:06:22.417838  136192 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 13:06:22.434599  136192 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 13:06:22.582942  136192 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 13:06:22.804266  136192 docker.go:234] disabling docker service ...
	I1213 13:06:22.804339  136192 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 13:06:22.821608  136192 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 13:06:22.837759  136192 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 13:06:23.007854  136192 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 13:06:23.153574  136192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 13:06:23.171473  136192 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 13:06:23.197940  136192 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 13:06:23.198022  136192 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:06:23.211201  136192 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 13:06:23.211282  136192 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:06:23.225666  136192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:06:23.239067  136192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:06:23.252244  136192 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 13:06:23.265889  136192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:06:23.279755  136192 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:06:23.304897  136192 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:06:23.320466  136192 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 13:06:23.334089  136192 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 13:06:23.334170  136192 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 13:06:23.356279  136192 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 13:06:23.371150  136192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:06:23.516804  136192 ssh_runner.go:195] Run: sudo systemctl restart crio
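Taken together, the sed edits above pin the pause image, switch CRI-O to the cgroupfs cgroup manager with a pod-scoped conmon cgroup, and open unprivileged ports via default_sysctls; the restart makes them take effect. A rough sketch of the resulting drop-in and one way to check it (the file contents themselves are not printed in the log):

	sudo cat /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	# default_sysctls = [
	#   "net.ipv4.ip_unprivileged_port_start=0",
	# ]
	sudo crictl info   # uses the endpoint written to /etc/crictl.yaml earlier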
	I1213 13:06:23.623470  136192 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 13:06:23.623566  136192 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 13:06:23.629854  136192 start.go:564] Will wait 60s for crictl version
	I1213 13:06:23.629955  136192 ssh_runner.go:195] Run: which crictl
	I1213 13:06:23.634640  136192 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 13:06:23.673263  136192 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1213 13:06:23.673442  136192 ssh_runner.go:195] Run: crio --version
	I1213 13:06:23.704139  136192 ssh_runner.go:195] Run: crio --version
	I1213 13:06:23.736836  136192 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1213 13:06:23.742052  136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:23.742684  136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
	I1213 13:06:23.742723  136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:23.743009  136192 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1213 13:06:23.748344  136192 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 13:06:23.764486  136192 kubeadm.go:884] updating cluster {Name:addons-685870 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.
2 ClusterName:addons-685870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.155 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Dis
ableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 13:06:23.764667  136192 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 13:06:23.764734  136192 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:06:23.795907  136192 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1213 13:06:23.795993  136192 ssh_runner.go:195] Run: which lz4
	I1213 13:06:23.801229  136192 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1213 13:06:23.807154  136192 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1213 13:06:23.807194  136192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
	I1213 13:06:25.052646  136192 crio.go:462] duration metric: took 1.251454659s to copy over tarball
	I1213 13:06:25.052756  136192 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1213 13:06:26.548011  136192 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.495221339s)
	I1213 13:06:26.548043  136192 crio.go:469] duration metric: took 1.495360464s to extract the tarball
	I1213 13:06:26.548056  136192 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1213 13:06:26.584287  136192 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:06:26.623988  136192 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 13:06:26.624017  136192 cache_images.go:86] Images are preloaded, skipping loading
	I1213 13:06:26.624026  136192 kubeadm.go:935] updating node { 192.168.39.155 8443 v1.34.2 crio true true} ...
	I1213 13:06:26.624161  136192 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-685870 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.155
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-685870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 13:06:26.624242  136192 ssh_runner.go:195] Run: crio config
	I1213 13:06:26.672101  136192 cni.go:84] Creating CNI manager for ""
	I1213 13:06:26.672125  136192 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 13:06:26.672143  136192 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 13:06:26.672170  136192 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.155 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-685870 NodeName:addons-685870 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.155"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.155 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 13:06:26.672292  136192 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.155
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-685870"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.155"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.155"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
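The generated kubeadm configuration above is written to /var/tmp/minikube/kubeadm.yaml a few steps later and consumed by the kubeadm init invocation further down. If one wanted to validate such a config by hand, a sketch would be:

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
	kubeadm config print init-defaults   # useful for comparing against upstream defaults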
	I1213 13:06:26.672360  136192 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 13:06:26.684860  136192 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 13:06:26.685007  136192 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 13:06:26.696761  136192 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1213 13:06:26.718338  136192 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 13:06:26.738782  136192 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1213 13:06:26.762774  136192 ssh_runner.go:195] Run: grep 192.168.39.155	control-plane.minikube.internal$ /etc/hosts
	I1213 13:06:26.767692  136192 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.155	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 13:06:26.783326  136192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:06:26.927658  136192 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 13:06:26.949320  136192 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870 for IP: 192.168.39.155
	I1213 13:06:26.949350  136192 certs.go:195] generating shared ca certs ...
	I1213 13:06:26.949368  136192 certs.go:227] acquiring lock for ca certs: {Name:mk4d1e73c1a19abecca2e995e14d97b9ab149024 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:06:26.949543  136192 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-131207/.minikube/ca.key
	I1213 13:06:27.020585  136192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-131207/.minikube/ca.crt ...
	I1213 13:06:27.020620  136192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-131207/.minikube/ca.crt: {Name:mkc6becf2b5f838ac912d42bc6ce0d833d4aff27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:06:27.020809  136192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-131207/.minikube/ca.key ...
	I1213 13:06:27.020821  136192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-131207/.minikube/ca.key: {Name:mk210c5828839a72839d87b1daf48c528ece1570 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:06:27.020906  136192 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-131207/.minikube/proxy-client-ca.key
	I1213 13:06:27.055678  136192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-131207/.minikube/proxy-client-ca.crt ...
	I1213 13:06:27.055709  136192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-131207/.minikube/proxy-client-ca.crt: {Name:mk6ca8839bfaae9762e7287d301b14c26c154a9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:06:27.055889  136192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-131207/.minikube/proxy-client-ca.key ...
	I1213 13:06:27.055902  136192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-131207/.minikube/proxy-client-ca.key: {Name:mk391bc7627b6c7926cedbd94a6cf416b256163f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:06:27.055977  136192 certs.go:257] generating profile certs ...
	I1213 13:06:27.056038  136192 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.key
	I1213 13:06:27.056060  136192 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.crt with IP's: []
	I1213 13:06:27.170089  136192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.crt ...
	I1213 13:06:27.170128  136192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.crt: {Name:mkcd6a7e733f02f497d31820fd8e522c46801a07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:06:27.170312  136192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.key ...
	I1213 13:06:27.170323  136192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.key: {Name:mkb49195f8d8cd9ff4872ba3e5202bb1d4127763 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:06:27.171112  136192 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/apiserver.key.b3eeabe3
	I1213 13:06:27.171136  136192 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/apiserver.crt.b3eeabe3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.155]
	I1213 13:06:27.260412  136192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/apiserver.crt.b3eeabe3 ...
	I1213 13:06:27.260448  136192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/apiserver.crt.b3eeabe3: {Name:mkb7f1531d10f1ca11b807c4deeade9593c38873 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:06:27.260622  136192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/apiserver.key.b3eeabe3 ...
	I1213 13:06:27.260636  136192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/apiserver.key.b3eeabe3: {Name:mk34473adfa4aa41d4f3704f7b241bd13b12328f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:06:27.260706  136192 certs.go:382] copying /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/apiserver.crt.b3eeabe3 -> /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/apiserver.crt
	I1213 13:06:27.260801  136192 certs.go:386] copying /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/apiserver.key.b3eeabe3 -> /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/apiserver.key
	I1213 13:06:27.260858  136192 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/proxy-client.key
	I1213 13:06:27.260879  136192 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/proxy-client.crt with IP's: []
	I1213 13:06:27.353419  136192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/proxy-client.crt ...
	I1213 13:06:27.353456  136192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/proxy-client.crt: {Name:mk58b875199fa3fe9d70911d1dcd14e8cb70d824 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:06:27.353637  136192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/proxy-client.key ...
	I1213 13:06:27.353651  136192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/proxy-client.key: {Name:mk09cf351ddd623415115f8a1cb58bfbf0a0e79e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:06:27.353830  136192 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 13:06:27.353877  136192 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem (1078 bytes)
	I1213 13:06:27.353907  136192 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/cert.pem (1123 bytes)
	I1213 13:06:27.353931  136192 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/key.pem (1675 bytes)
	I1213 13:06:27.354671  136192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 13:06:27.387139  136192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 13:06:27.421827  136192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 13:06:27.455062  136192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 13:06:27.488938  136192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1213 13:06:27.521731  136192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 13:06:27.553005  136192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 13:06:27.583824  136192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 13:06:27.615856  136192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 13:06:27.652487  136192 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 13:06:27.678188  136192 ssh_runner.go:195] Run: openssl version
	I1213 13:06:27.685724  136192 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:06:27.698872  136192 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 13:06:27.713966  136192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:06:27.719658  136192 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 13:06 /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:06:27.719753  136192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:06:27.727820  136192 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 13:06:27.740292  136192 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
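The b5213941.0 link created above follows OpenSSL's hashed-directory convention: the filename is the subject hash of the CA certificate (the value produced by the openssl x509 -hash call) plus a .0 suffix, which is how TLS clients on the guest locate minikubeCA.pem during verification. The equivalent manual steps, for reference:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0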
	I1213 13:06:27.752732  136192 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 13:06:27.757673  136192 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 13:06:27.757737  136192 kubeadm.go:401] StartCluster: {Name:addons-685870 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 C
lusterName:addons-685870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.155 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disabl
eOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:06:27.757847  136192 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 13:06:27.757907  136192 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 13:06:27.791959  136192 cri.go:89] found id: ""
	I1213 13:06:27.792060  136192 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 13:06:27.806777  136192 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 13:06:27.821202  136192 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 13:06:27.834224  136192 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 13:06:27.834254  136192 kubeadm.go:158] found existing configuration files:
	
	I1213 13:06:27.834309  136192 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 13:06:27.848208  136192 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 13:06:27.848296  136192 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 13:06:27.863046  136192 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 13:06:27.876946  136192 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 13:06:27.877019  136192 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 13:06:27.889876  136192 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 13:06:27.901456  136192 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 13:06:27.901529  136192 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 13:06:27.914050  136192 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 13:06:27.925250  136192 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 13:06:27.925324  136192 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 13:06:27.937648  136192 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1213 13:06:27.987236  136192 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1213 13:06:27.987355  136192 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 13:06:28.088459  136192 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 13:06:28.088591  136192 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 13:06:28.088745  136192 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 13:06:28.098588  136192 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 13:06:28.101492  136192 out.go:252]   - Generating certificates and keys ...
	I1213 13:06:28.102177  136192 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 13:06:28.102275  136192 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 13:06:28.337450  136192 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 13:06:28.508840  136192 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 13:06:28.738614  136192 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 13:06:28.833990  136192 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 13:06:29.215739  136192 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 13:06:29.215925  136192 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-685870 localhost] and IPs [192.168.39.155 127.0.0.1 ::1]
	I1213 13:06:29.498442  136192 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 13:06:29.498615  136192 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-685870 localhost] and IPs [192.168.39.155 127.0.0.1 ::1]
	I1213 13:06:29.785065  136192 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 13:06:29.824816  136192 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 13:06:29.892652  136192 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 13:06:29.892783  136192 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 13:06:30.171653  136192 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 13:06:30.399034  136192 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 13:06:30.557776  136192 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 13:06:30.783252  136192 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 13:06:31.092971  136192 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 13:06:31.093467  136192 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 13:06:31.096606  136192 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 13:06:31.098383  136192 out.go:252]   - Booting up control plane ...
	I1213 13:06:31.098509  136192 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 13:06:31.098599  136192 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 13:06:31.099627  136192 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 13:06:31.118680  136192 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 13:06:31.119349  136192 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 13:06:31.126265  136192 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 13:06:31.126535  136192 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 13:06:31.126616  136192 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 13:06:31.301451  136192 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 13:06:31.301600  136192 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 13:06:31.802326  136192 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.694057ms
	I1213 13:06:31.805204  136192 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1213 13:06:31.805312  136192 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.155:8443/livez
	I1213 13:06:31.805436  136192 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1213 13:06:31.805571  136192 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1213 13:06:34.495999  136192 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.693587662s
	I1213 13:06:35.809561  136192 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.008098096s
	I1213 13:06:38.300135  136192 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.501522641s
	I1213 13:06:38.319096  136192 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 13:06:38.338510  136192 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 13:06:38.351406  136192 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 13:06:38.351615  136192 kubeadm.go:319] [mark-control-plane] Marking the node addons-685870 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 13:06:38.362694  136192 kubeadm.go:319] [bootstrap-token] Using token: 4rz4x4.q7etm0eqh5h03p3i
	I1213 13:06:38.364043  136192 out.go:252]   - Configuring RBAC rules ...
	I1213 13:06:38.364212  136192 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 13:06:38.372994  136192 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 13:06:38.378997  136192 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 13:06:38.384332  136192 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 13:06:38.388394  136192 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 13:06:38.391844  136192 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 13:06:38.709212  136192 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 13:06:39.143173  136192 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1213 13:06:39.706526  136192 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1213 13:06:39.709084  136192 kubeadm.go:319] 
	I1213 13:06:39.709142  136192 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1213 13:06:39.709148  136192 kubeadm.go:319] 
	I1213 13:06:39.709316  136192 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1213 13:06:39.709344  136192 kubeadm.go:319] 
	I1213 13:06:39.709369  136192 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1213 13:06:39.709419  136192 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 13:06:39.709517  136192 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 13:06:39.709545  136192 kubeadm.go:319] 
	I1213 13:06:39.709615  136192 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1213 13:06:39.709625  136192 kubeadm.go:319] 
	I1213 13:06:39.709709  136192 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 13:06:39.709722  136192 kubeadm.go:319] 
	I1213 13:06:39.709785  136192 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1213 13:06:39.709892  136192 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 13:06:39.709987  136192 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 13:06:39.709998  136192 kubeadm.go:319] 
	I1213 13:06:39.710130  136192 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 13:06:39.710245  136192 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1213 13:06:39.710256  136192 kubeadm.go:319] 
	I1213 13:06:39.710372  136192 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 4rz4x4.q7etm0eqh5h03p3i \
	I1213 13:06:39.710523  136192 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:0d7bdf6e2899acb1365169f3e602d91eb327e6d9802bf5e86c346c4733b25f8a \
	I1213 13:06:39.710554  136192 kubeadm.go:319] 	--control-plane 
	I1213 13:06:39.710561  136192 kubeadm.go:319] 
	I1213 13:06:39.710684  136192 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1213 13:06:39.710693  136192 kubeadm.go:319] 
	I1213 13:06:39.710814  136192 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 4rz4x4.q7etm0eqh5h03p3i \
	I1213 13:06:39.711019  136192 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:0d7bdf6e2899acb1365169f3e602d91eb327e6d9802bf5e86c346c4733b25f8a 
	I1213 13:06:39.711191  136192 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
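The only kubeadm warning here concerns the kubelet unit not being enabled for boot; in a minikube-managed guest this is generally benign, since the log above shows minikube starting the kubelet itself, but the remedy the warning points at would simply be:

	sudo systemctl enable kubelet.service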
	I1213 13:06:39.711208  136192 cni.go:84] Creating CNI manager for ""
	I1213 13:06:39.711220  136192 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 13:06:39.712890  136192 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 13:06:39.714055  136192 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 13:06:39.726710  136192 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1213 13:06:39.748747  136192 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 13:06:39.748843  136192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:06:39.748911  136192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-685870 minikube.k8s.io/updated_at=2025_12_13T13_06_39_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7 minikube.k8s.io/name=addons-685870 minikube.k8s.io/primary=true
	I1213 13:06:39.885985  136192 ops.go:34] apiserver oom_adj: -16
	I1213 13:06:39.886134  136192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:06:40.386502  136192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:06:40.887012  136192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:06:41.386282  136192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:06:41.886717  136192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:06:42.386565  136192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:06:42.886664  136192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:06:43.386544  136192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:06:43.886849  136192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:06:44.387093  136192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:06:44.887001  136192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:06:45.016916  136192 kubeadm.go:1114] duration metric: took 5.268135557s to wait for elevateKubeSystemPrivileges
	I1213 13:06:45.016958  136192 kubeadm.go:403] duration metric: took 17.259226192s to StartCluster
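The burst of identical `kubectl get sa default` commands between 13:06:39 and 13:06:45 is a plain poll: elevateKubeSystemPrivileges creates the minikube-rbac cluster-admin binding and then waits for the default service account to exist before declaring the cluster usable, retrying roughly every half second. A rough Go equivalent of that wait loop (the binary and kubeconfig paths are taken from the log; the interval and timeout here are assumptions):

    // Poll until the "default" service account exists, as the repeated
    // "kubectl get sa default" calls in the log do. Interval and timeout are
    // assumptions for illustration.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.34.2/kubectl"
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            cmd := exec.Command("sudo", kubectl,
                "get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig")
            if err := cmd.Run(); err == nil {
                fmt.Println("default service account is ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        panic("timed out waiting for the default service account")
    }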
	I1213 13:06:45.016993  136192 settings.go:142] acquiring lock: {Name:mk721202c5d0c56fb9fb8fa9c13a73c8448f716f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:06:45.017145  136192 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22122-131207/kubeconfig
	I1213 13:06:45.017555  136192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-131207/kubeconfig: {Name:mk5ec7ec5b8552878ed34d3387da68b813d7cd4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:06:45.017791  136192 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 13:06:45.017828  136192 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.155 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 13:06:45.017874  136192 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1213 13:06:45.017999  136192 addons.go:70] Setting yakd=true in profile "addons-685870"
	I1213 13:06:45.018013  136192 addons.go:70] Setting inspektor-gadget=true in profile "addons-685870"
	I1213 13:06:45.018022  136192 addons.go:239] Setting addon yakd=true in "addons-685870"
	I1213 13:06:45.018034  136192 addons.go:239] Setting addon inspektor-gadget=true in "addons-685870"
	I1213 13:06:45.018059  136192 host.go:66] Checking if "addons-685870" exists ...
	I1213 13:06:45.018059  136192 host.go:66] Checking if "addons-685870" exists ...
	I1213 13:06:45.018081  136192 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-685870"
	I1213 13:06:45.018100  136192 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-685870"
	I1213 13:06:45.018103  136192 addons.go:70] Setting registry-creds=true in profile "addons-685870"
	I1213 13:06:45.018113  136192 config.go:182] Loaded profile config "addons-685870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:06:45.018136  136192 host.go:66] Checking if "addons-685870" exists ...
	I1213 13:06:45.018139  136192 addons.go:239] Setting addon registry-creds=true in "addons-685870"
	I1213 13:06:45.018125  136192 addons.go:70] Setting ingress=true in profile "addons-685870"
	I1213 13:06:45.018165  136192 addons.go:239] Setting addon ingress=true in "addons-685870"
	I1213 13:06:45.018175  136192 host.go:66] Checking if "addons-685870" exists ...
	I1213 13:06:45.018184  136192 addons.go:70] Setting gcp-auth=true in profile "addons-685870"
	I1213 13:06:45.018199  136192 host.go:66] Checking if "addons-685870" exists ...
	I1213 13:06:45.018204  136192 mustload.go:66] Loading cluster: addons-685870
	I1213 13:06:45.018380  136192 config.go:182] Loaded profile config "addons-685870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:06:45.018953  136192 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-685870"
	I1213 13:06:45.018979  136192 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-685870"
	I1213 13:06:45.019000  136192 addons.go:70] Setting storage-provisioner=true in profile "addons-685870"
	I1213 13:06:45.019024  136192 addons.go:239] Setting addon storage-provisioner=true in "addons-685870"
	I1213 13:06:45.019049  136192 host.go:66] Checking if "addons-685870" exists ...
	I1213 13:06:45.018054  136192 addons.go:70] Setting default-storageclass=true in profile "addons-685870"
	I1213 13:06:45.019099  136192 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-685870"
	I1213 13:06:45.019172  136192 addons.go:70] Setting cloud-spanner=true in profile "addons-685870"
	I1213 13:06:45.019192  136192 addons.go:239] Setting addon cloud-spanner=true in "addons-685870"
	I1213 13:06:45.019220  136192 host.go:66] Checking if "addons-685870" exists ...
	I1213 13:06:45.019322  136192 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-685870"
	I1213 13:06:45.019378  136192 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-685870"
	I1213 13:06:45.019401  136192 host.go:66] Checking if "addons-685870" exists ...
	I1213 13:06:45.019579  136192 addons.go:70] Setting metrics-server=true in profile "addons-685870"
	I1213 13:06:45.019625  136192 addons.go:239] Setting addon metrics-server=true in "addons-685870"
	I1213 13:06:45.019670  136192 host.go:66] Checking if "addons-685870" exists ...
	I1213 13:06:45.019859  136192 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-685870"
	I1213 13:06:45.019898  136192 addons.go:70] Setting registry=true in profile "addons-685870"
	I1213 13:06:45.019903  136192 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-685870"
	I1213 13:06:45.019865  136192 addons.go:70] Setting ingress-dns=true in profile "addons-685870"
	I1213 13:06:45.019937  136192 addons.go:239] Setting addon ingress-dns=true in "addons-685870"
	I1213 13:06:45.019941  136192 host.go:66] Checking if "addons-685870" exists ...
	I1213 13:06:45.019963  136192 host.go:66] Checking if "addons-685870" exists ...
	I1213 13:06:45.020112  136192 addons.go:70] Setting volumesnapshots=true in profile "addons-685870"
	I1213 13:06:45.020198  136192 addons.go:239] Setting addon volumesnapshots=true in "addons-685870"
	I1213 13:06:45.020236  136192 host.go:66] Checking if "addons-685870" exists ...
	I1213 13:06:45.019881  136192 addons.go:70] Setting volcano=true in profile "addons-685870"
	I1213 13:06:45.020331  136192 addons.go:239] Setting addon volcano=true in "addons-685870"
	I1213 13:06:45.020363  136192 host.go:66] Checking if "addons-685870" exists ...
	I1213 13:06:45.019917  136192 addons.go:239] Setting addon registry=true in "addons-685870"
	I1213 13:06:45.020408  136192 host.go:66] Checking if "addons-685870" exists ...
	I1213 13:06:45.020305  136192 out.go:179] * Verifying Kubernetes components...
	I1213 13:06:45.021855  136192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:06:45.024752  136192 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1213 13:06:45.024796  136192 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1213 13:06:45.024819  136192 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1213 13:06:45.025594  136192 host.go:66] Checking if "addons-685870" exists ...
	I1213 13:06:45.028394  136192 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-685870"
	I1213 13:06:45.028435  136192 host.go:66] Checking if "addons-685870" exists ...
	I1213 13:06:45.028395  136192 addons.go:239] Setting addon default-storageclass=true in "addons-685870"
	I1213 13:06:45.028841  136192 host.go:66] Checking if "addons-685870" exists ...
	I1213 13:06:45.028893  136192 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1213 13:06:45.029445  136192 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1213 13:06:45.028991  136192 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1213 13:06:45.029586  136192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1213 13:06:45.029792  136192 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1213 13:06:45.029791  136192 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 13:06:45.029832  136192 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1213 13:06:45.029807  136192 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	W1213 13:06:45.029877  136192 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1213 13:06:45.029940  136192 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1213 13:06:45.029959  136192 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1213 13:06:45.030454  136192 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1213 13:06:45.030462  136192 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1213 13:06:45.031186  136192 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1213 13:06:45.031198  136192 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 13:06:45.031203  136192 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1213 13:06:45.031221  136192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 13:06:45.031244  136192 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1213 13:06:45.031268  136192 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1213 13:06:45.031282  136192 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1213 13:06:45.031342  136192 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1213 13:06:45.031908  136192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1213 13:06:45.031990  136192 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1213 13:06:45.032315  136192 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 13:06:45.032325  136192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1213 13:06:45.032328  136192 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 13:06:45.032819  136192 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1213 13:06:45.032854  136192 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1213 13:06:45.033143  136192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1213 13:06:45.032859  136192 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1213 13:06:45.033206  136192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1213 13:06:45.032871  136192 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1213 13:06:45.033297  136192 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1213 13:06:45.033512  136192 out.go:179]   - Using image docker.io/busybox:stable
	I1213 13:06:45.033538  136192 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1213 13:06:45.033565  136192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1213 13:06:45.034787  136192 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1213 13:06:45.034912  136192 out.go:179]   - Using image docker.io/registry:3.0.0
	I1213 13:06:45.035708  136192 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1213 13:06:45.035738  136192 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1213 13:06:45.035831  136192 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1213 13:06:45.035846  136192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1213 13:06:45.035914  136192 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1213 13:06:45.035933  136192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1213 13:06:45.037009  136192 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1213 13:06:45.037028  136192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1213 13:06:45.037955  136192 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1213 13:06:45.038533  136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:45.039905  136192 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1213 13:06:45.040413  136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
	I1213 13:06:45.040453  136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:45.040542  136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:45.040997  136192 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/id_rsa Username:docker}
	I1213 13:06:45.041493  136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:45.041804  136192 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1213 13:06:45.042291  136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
	I1213 13:06:45.042326  136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:45.042968  136192 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/id_rsa Username:docker}
	I1213 13:06:45.043039  136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
	I1213 13:06:45.043088  136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:45.043204  136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:45.043715  136192 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1213 13:06:45.043988  136192 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/id_rsa Username:docker}
	I1213 13:06:45.044237  136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:45.044592  136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:45.044853  136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
	I1213 13:06:45.044980  136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:45.044984  136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:45.045370  136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:45.045873  136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:45.045990  136192 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1213 13:06:45.046169  136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
	I1213 13:06:45.046202  136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:45.046216  136192 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/id_rsa Username:docker}
	I1213 13:06:45.046595  136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
	I1213 13:06:45.046633  136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:45.046734  136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:45.046802  136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
	I1213 13:06:45.046829  136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:45.046880  136192 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/id_rsa Username:docker}
	I1213 13:06:45.046931  136192 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1213 13:06:45.046947  136192 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1213 13:06:45.047189  136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
	I1213 13:06:45.047223  136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:45.047273  136192 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/id_rsa Username:docker}
	I1213 13:06:45.047294  136192 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/id_rsa Username:docker}
	I1213 13:06:45.047344  136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
	I1213 13:06:45.047369  136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:45.047520  136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:45.047795  136192 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/id_rsa Username:docker}
	I1213 13:06:45.047813  136192 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/id_rsa Username:docker}
	I1213 13:06:45.048000  136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:45.048251  136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
	I1213 13:06:45.048284  136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:45.048673  136192 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/id_rsa Username:docker}
	I1213 13:06:45.049252  136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:45.049332  136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:45.049530  136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
	I1213 13:06:45.049561  136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:45.049751  136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
	I1213 13:06:45.049785  136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:45.049831  136192 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/id_rsa Username:docker}
	I1213 13:06:45.050116  136192 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/id_rsa Username:docker}
	I1213 13:06:45.050286  136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
	I1213 13:06:45.050316  136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:45.050402  136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
	I1213 13:06:45.050447  136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:45.050519  136192 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/id_rsa Username:docker}
	I1213 13:06:45.050740  136192 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/id_rsa Username:docker}
	I1213 13:06:45.051414  136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:45.051745  136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
	I1213 13:06:45.051762  136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:45.051888  136192 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/id_rsa Username:docker}
	W1213 13:06:45.221676  136192 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:58492->192.168.39.155:22: read: connection reset by peer
	I1213 13:06:45.221711  136192 retry.go:31] will retry after 134.119975ms: ssh: handshake failed: read tcp 192.168.39.1:58492->192.168.39.155:22: read: connection reset by peer
	W1213 13:06:45.264693  136192 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:58504->192.168.39.155:22: read: connection reset by peer
	I1213 13:06:45.264737  136192 retry.go:31] will retry after 261.947229ms: ssh: handshake failed: read tcp 192.168.39.1:58504->192.168.39.155:22: read: connection reset by peer
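The two handshake failures above are the usual race between the addon installers all dialing in at once and sshd in the freshly booted guest; sshutil logs the failure and retry.go reconnects after a short, growing delay. A minimal sketch of that retry-with-backoff shape (attempt count, base delay, and jitter are assumptions, not minikube's exact policy):

    // Dial with growing, jittered backoff, in the spirit of the
    // "will retry after ..." lines above. Parameters are illustrative.
    package main

    import (
        "fmt"
        "math/rand"
        "net"
        "time"
    )

    func dialWithRetry(addr string, attempts int) (net.Conn, error) {
        var lastErr error
        delay := 100 * time.Millisecond
        for i := 0; i < attempts; i++ {
            conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
            if err == nil {
                return conn, nil
            }
            lastErr = err
            jitter := time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("dial failure (will retry after %v): %v\n", delay+jitter, err)
            time.Sleep(delay + jitter)
            delay *= 2
        }
        return nil, lastErr
    }

    func main() {
        conn, err := dialWithRetry("192.168.39.155:22", 5)
        if err != nil {
            fmt.Println("giving up:", err)
            return
        }
        conn.Close()
    }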
	I1213 13:06:45.758712  136192 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 13:06:45.758833  136192 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
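The long pipeline just above rewrites the coredns ConfigMap in place: it fetches the Corefile, splices a hosts block mapping host.minikube.internal to the gateway IP 192.168.39.1 in front of the `forward . /etc/resolv.conf` directive, adds a `log` line before `errors`, and feeds the result back through `kubectl replace`. A Go sketch of the hosts-block insertion alone, applied to a sample Corefile (the sample itself is an assumption; only the inserted lines match the log):

    // Insert a "hosts" block ahead of CoreDNS's forward directive, mirroring
    // the sed expression in the log. The sample Corefile is illustrative.
    package main

    import (
        "fmt"
        "strings"
    )

    func injectHostRecord(corefile string) string {
        hostsBlock := "        hosts {\n" +
            "           192.168.39.1 host.minikube.internal\n" +
            "           fallthrough\n" +
            "        }\n"
        var out strings.Builder
        for _, line := range strings.SplitAfter(corefile, "\n") {
            if strings.HasPrefix(strings.TrimLeft(line, " "), "forward . /etc/resolv.conf") {
                out.WriteString(hostsBlock)
            }
            out.WriteString(line)
        }
        return out.String()
    }

    func main() {
        sample := ".:53 {\n" +
            "        errors\n" +
            "        health\n" +
            "        forward . /etc/resolv.conf {\n" +
            "           max_concurrent 1000\n" +
            "        }\n" +
            "        cache 30\n" +
            "}\n"
        fmt.Print(injectHostRecord(sample))
    }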
	I1213 13:06:45.805615  136192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1213 13:06:45.819620  136192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1213 13:06:45.820266  136192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 13:06:45.915934  136192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1213 13:06:45.936689  136192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 13:06:45.958694  136192 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1213 13:06:45.958747  136192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1213 13:06:45.973535  136192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1213 13:06:46.014641  136192 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1213 13:06:46.014683  136192 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1213 13:06:46.024138  136192 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1213 13:06:46.024162  136192 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1213 13:06:46.050040  136192 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1213 13:06:46.050063  136192 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1213 13:06:46.070780  136192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1213 13:06:46.143955  136192 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1213 13:06:46.143988  136192 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1213 13:06:46.149340  136192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1213 13:06:46.230803  136192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1213 13:06:46.411602  136192 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1213 13:06:46.411640  136192 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1213 13:06:46.414042  136192 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1213 13:06:46.414105  136192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1213 13:06:46.419595  136192 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1213 13:06:46.419613  136192 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1213 13:06:46.428373  136192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1213 13:06:46.446451  136192 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1213 13:06:46.446475  136192 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1213 13:06:46.502080  136192 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1213 13:06:46.502110  136192 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1213 13:06:46.645636  136192 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 13:06:46.645664  136192 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1213 13:06:46.684517  136192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1213 13:06:46.691013  136192 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1213 13:06:46.691100  136192 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1213 13:06:46.736244  136192 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1213 13:06:46.736281  136192 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1213 13:06:46.771104  136192 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1213 13:06:46.771130  136192 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1213 13:06:46.889934  136192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 13:06:46.966444  136192 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1213 13:06:46.966479  136192 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1213 13:06:46.969563  136192 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1213 13:06:46.969582  136192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1213 13:06:46.981367  136192 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1213 13:06:46.981390  136192 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1213 13:06:47.286285  136192 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1213 13:06:47.286316  136192 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1213 13:06:47.288360  136192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1213 13:06:47.311879  136192 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 13:06:47.311905  136192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1213 13:06:47.581669  136192 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1213 13:06:47.581697  136192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1213 13:06:47.657485  136192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 13:06:48.072591  136192 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1213 13:06:48.072620  136192 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1213 13:06:48.187153  136192 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.42839983s)
	I1213 13:06:48.187220  136192 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.428356041s)
	I1213 13:06:48.187243  136192 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1213 13:06:48.188893  136192 node_ready.go:35] waiting up to 6m0s for node "addons-685870" to be "Ready" ...
	I1213 13:06:48.197434  136192 node_ready.go:49] node "addons-685870" is "Ready"
	I1213 13:06:48.197457  136192 node_ready.go:38] duration metric: took 8.514158ms for node "addons-685870" to be "Ready" ...
	I1213 13:06:48.197468  136192 api_server.go:52] waiting for apiserver process to appear ...
	I1213 13:06:48.197510  136192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 13:06:48.604232  136192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.798572172s)
	I1213 13:06:48.604353  136192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.784052323s)
	I1213 13:06:48.604387  136192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.784720127s)
	I1213 13:06:48.607316  136192 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1213 13:06:48.607342  136192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1213 13:06:48.693986  136192 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-685870" context rescaled to 1 replicas
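On a single-node cluster minikube trims the stock two-replica coredns Deployment down to one, which is what the "rescaled to 1 replicas" line records. The real code goes through the Kubernetes API client; a plain kubectl equivalent, wrapped in Go only to match the other sketches here (the context name comes from this test run):

    // Scale kube-system/coredns to a single replica, the effect reported by
    // the "rescaled to 1 replicas" log line.
    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("kubectl", "--context", "addons-685870",
            "-n", "kube-system", "scale", "deployment", "coredns", "--replicas=1")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }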
	I1213 13:06:48.825808  136192 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1213 13:06:48.825839  136192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1213 13:06:49.190080  136192 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1213 13:06:49.190112  136192 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1213 13:06:49.301996  136192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1213 13:06:49.990916  136192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.074938605s)
	I1213 13:06:49.990999  136192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.054268788s)
	I1213 13:06:50.926969  136192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.856137833s)
	I1213 13:06:50.927178  136192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.953603595s)
	I1213 13:06:51.132232  136192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.982844342s)
	I1213 13:06:51.132280  136192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.901438552s)
	I1213 13:06:52.479567  136192 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1213 13:06:52.482697  136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:52.483179  136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
	I1213 13:06:52.483218  136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:52.483410  136192 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/id_rsa Username:docker}
	I1213 13:06:52.640648  136192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.212236395s)
	I1213 13:06:52.640685  136192 addons.go:495] Verifying addon ingress=true in "addons-685870"
	I1213 13:06:52.640772  136192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.956212932s)
	I1213 13:06:52.640857  136192 addons.go:495] Verifying addon registry=true in "addons-685870"
	I1213 13:06:52.640866  136192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.750879656s)
	I1213 13:06:52.640885  136192 addons.go:495] Verifying addon metrics-server=true in "addons-685870"
	I1213 13:06:52.640958  136192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.352557093s)
	I1213 13:06:52.642099  136192 out.go:179] * Verifying ingress addon...
	I1213 13:06:52.642104  136192 out.go:179] * Verifying registry addon...
	I1213 13:06:52.642641  136192 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-685870 service yakd-dashboard -n yakd-dashboard
	
	I1213 13:06:52.644056  136192 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1213 13:06:52.644237  136192 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1213 13:06:52.715136  136192 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1213 13:06:52.788062  136192 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1213 13:06:52.788099  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:52.788067  136192 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1213 13:06:52.788122  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:52.795039  136192 addons.go:239] Setting addon gcp-auth=true in "addons-685870"
	I1213 13:06:52.795101  136192 host.go:66] Checking if "addons-685870" exists ...
	I1213 13:06:52.796787  136192 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1213 13:06:52.799278  136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:52.799786  136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
	I1213 13:06:52.799831  136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
	I1213 13:06:52.800028  136192 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/id_rsa Username:docker}
	I1213 13:06:52.988279  136192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.330743397s)
	I1213 13:06:52.988302  136192 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.790778977s)
	I1213 13:06:52.988326  136192 api_server.go:72] duration metric: took 7.970460889s to wait for apiserver process to appear ...
	I1213 13:06:52.988333  136192 api_server.go:88] waiting for apiserver healthz status ...
	W1213 13:06:52.988328  136192 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1213 13:06:52.988354  136192 api_server.go:253] Checking apiserver healthz at https://192.168.39.155:8443/healthz ...
	I1213 13:06:52.988357  136192 retry.go:31] will retry after 190.807387ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
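Both failures above come from ordering rather than from anything being broken: the snapshot CRDs and a VolumeSnapshotClass that depends on them are applied in the same kubectl invocation, and the API server has not finished registering the new types by the time the class is validated, hence "ensure CRDs are installed first". minikube simply retries (and, a little later in this log, reapplies with --force). A safer ordering is to apply the CRDs first and block until they are Established; a sketch reusing the paths from the log (the timeout is an assumption):

    // Apply the snapshot CRD, wait for it to be Established, then apply the
    // VolumeSnapshotClass, avoiding the "ensure CRDs are installed first" race.
    package main

    import (
        "os"
        "os/exec"
    )

    func kubectl(args ...string) error {
        cmd := exec.Command("sudo", append([]string{
            "/var/lib/minikube/binaries/v1.34.2/kubectl",
            "--kubeconfig=/var/lib/minikube/kubeconfig",
        }, args...)...)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        if err := kubectl("apply", "-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml"); err != nil {
            panic(err)
        }
        if err := kubectl("wait", "--for=condition=Established", "--timeout=60s",
            "crd/volumesnapshotclasses.snapshot.storage.k8s.io"); err != nil {
            panic(err)
        }
        if err := kubectl("apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"); err != nil {
            panic(err)
        }
    }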
	I1213 13:06:53.020180  136192 api_server.go:279] https://192.168.39.155:8443/healthz returned 200:
	ok
	I1213 13:06:53.040633  136192 api_server.go:141] control plane version: v1.34.2
	I1213 13:06:53.040683  136192 api_server.go:131] duration metric: took 52.340104ms to wait for apiserver health ...
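The healthz probe is nothing more than an HTTPS GET against https://192.168.39.155:8443/healthz that expects a 200 with body "ok". A stripped-down sketch; TLS verification is skipped here purely to keep it short, which is an assumption of the sketch, not necessarily how minikube's own client behaves:

    // Probe the API server's /healthz endpoint, as api_server.go does above.
    // InsecureSkipVerify is for brevity only; a real client should trust the cluster CA.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.39.155:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
    }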
	I1213 13:06:53.040699  136192 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 13:06:53.098834  136192 system_pods.go:59] 17 kube-system pods found
	I1213 13:06:53.098894  136192 system_pods.go:61] "amd-gpu-device-plugin-sl2f8" [10079e75-52ad-4ae2-97a6-8c8a76f8cd2e] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1213 13:06:53.098909  136192 system_pods.go:61] "coredns-66bc5c9577-277fw" [12d631bf-ed9a-438c-8e6b-7d606f1c5363] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 13:06:53.098924  136192 system_pods.go:61] "coredns-66bc5c9577-ztskd" [5b623913-ee74-409f-a7bc-5fda744c8583] Running
	I1213 13:06:53.098934  136192 system_pods.go:61] "etcd-addons-685870" [381c1c3c-6e27-4f86-b43d-455f0cd88783] Running
	I1213 13:06:53.098940  136192 system_pods.go:61] "kube-apiserver-addons-685870" [16689b99-d74a-4b25-820e-4975dbaa96bc] Running
	I1213 13:06:53.098949  136192 system_pods.go:61] "kube-controller-manager-addons-685870" [13f55bbd-6f0f-4c13-a401-bd5d4719d5f6] Running
	I1213 13:06:53.098957  136192 system_pods.go:61] "kube-ingress-dns-minikube" [c7214f42-abd1-4b9a-a6a5-431cff38e423] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 13:06:53.098969  136192 system_pods.go:61] "kube-proxy-hlmj5" [431fd1a1-aa6e-4095-af35-72087499f30a] Running
	I1213 13:06:53.098979  136192 system_pods.go:61] "kube-scheduler-addons-685870" [87faddcd-32df-403f-a0f8-7e9b8370940c] Running
	I1213 13:06:53.098989  136192 system_pods.go:61] "metrics-server-85b7d694d7-xqtfb" [2329f277-682f-41d0-9879-ac4768581afd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 13:06:53.099005  136192 system_pods.go:61] "nvidia-device-plugin-daemonset-k6r7t" [fabec6f5-3861-4173-b733-8b09a8eeddfa] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1213 13:06:53.099016  136192 system_pods.go:61] "registry-6b586f9694-4xd6c" [42f338ba-b090-4f81-ad48-bcb9795e19cb] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 13:06:53.099025  136192 system_pods.go:61] "registry-creds-764b6fb674-lmxzj" [eb2685cd-b67a-4045-a9fd-f3e2480fd2b7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1213 13:06:53.099035  136192 system_pods.go:61] "registry-proxy-ww99f" [b233ab84-669c-4f80-a75e-051ffeafc9b4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1213 13:06:53.099042  136192 system_pods.go:61] "snapshot-controller-7d9fbc56b8-68skh" [b3e3630b-02ff-49ab-a56a-554ddddfc5e9] Pending
	I1213 13:06:53.099048  136192 system_pods.go:61] "snapshot-controller-7d9fbc56b8-sxjgh" [6feb5975-075d-4182-acce-5e1e857e5709] Pending
	I1213 13:06:53.099058  136192 system_pods.go:61] "storage-provisioner" [bef2cdab-e284-4ee0-b0ab-61d8d1ea5f8e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 13:06:53.099068  136192 system_pods.go:74] duration metric: took 58.359055ms to wait for pod list to return data ...
	I1213 13:06:53.099116  136192 default_sa.go:34] waiting for default service account to be created ...
	I1213 13:06:53.166025  136192 default_sa.go:45] found service account: "default"
	I1213 13:06:53.166054  136192 default_sa.go:55] duration metric: took 66.931268ms for default service account to be created ...
	I1213 13:06:53.166084  136192 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 13:06:53.179637  136192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 13:06:53.196486  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:53.196701  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:53.197178  136192 system_pods.go:86] 17 kube-system pods found
	I1213 13:06:53.197216  136192 system_pods.go:89] "amd-gpu-device-plugin-sl2f8" [10079e75-52ad-4ae2-97a6-8c8a76f8cd2e] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1213 13:06:53.197232  136192 system_pods.go:89] "coredns-66bc5c9577-277fw" [12d631bf-ed9a-438c-8e6b-7d606f1c5363] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 13:06:53.197243  136192 system_pods.go:89] "coredns-66bc5c9577-ztskd" [5b623913-ee74-409f-a7bc-5fda744c8583] Running
	I1213 13:06:53.197252  136192 system_pods.go:89] "etcd-addons-685870" [381c1c3c-6e27-4f86-b43d-455f0cd88783] Running
	I1213 13:06:53.197258  136192 system_pods.go:89] "kube-apiserver-addons-685870" [16689b99-d74a-4b25-820e-4975dbaa96bc] Running
	I1213 13:06:53.197264  136192 system_pods.go:89] "kube-controller-manager-addons-685870" [13f55bbd-6f0f-4c13-a401-bd5d4719d5f6] Running
	I1213 13:06:53.197272  136192 system_pods.go:89] "kube-ingress-dns-minikube" [c7214f42-abd1-4b9a-a6a5-431cff38e423] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 13:06:53.197277  136192 system_pods.go:89] "kube-proxy-hlmj5" [431fd1a1-aa6e-4095-af35-72087499f30a] Running
	I1213 13:06:53.197284  136192 system_pods.go:89] "kube-scheduler-addons-685870" [87faddcd-32df-403f-a0f8-7e9b8370940c] Running
	I1213 13:06:53.197298  136192 system_pods.go:89] "metrics-server-85b7d694d7-xqtfb" [2329f277-682f-41d0-9879-ac4768581afd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 13:06:53.197310  136192 system_pods.go:89] "nvidia-device-plugin-daemonset-k6r7t" [fabec6f5-3861-4173-b733-8b09a8eeddfa] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1213 13:06:53.197322  136192 system_pods.go:89] "registry-6b586f9694-4xd6c" [42f338ba-b090-4f81-ad48-bcb9795e19cb] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 13:06:53.197333  136192 system_pods.go:89] "registry-creds-764b6fb674-lmxzj" [eb2685cd-b67a-4045-a9fd-f3e2480fd2b7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1213 13:06:53.197344  136192 system_pods.go:89] "registry-proxy-ww99f" [b233ab84-669c-4f80-a75e-051ffeafc9b4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1213 13:06:53.197352  136192 system_pods.go:89] "snapshot-controller-7d9fbc56b8-68skh" [b3e3630b-02ff-49ab-a56a-554ddddfc5e9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 13:06:53.197360  136192 system_pods.go:89] "snapshot-controller-7d9fbc56b8-sxjgh" [6feb5975-075d-4182-acce-5e1e857e5709] Pending
	I1213 13:06:53.197368  136192 system_pods.go:89] "storage-provisioner" [bef2cdab-e284-4ee0-b0ab-61d8d1ea5f8e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 13:06:53.197379  136192 system_pods.go:126] duration metric: took 31.28651ms to wait for k8s-apps to be running ...
	I1213 13:06:53.197394  136192 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 13:06:53.197454  136192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 13:06:53.663675  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:53.663686  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:54.001530  136192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.699488024s)
	I1213 13:06:54.001573  136192 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-685870"
	I1213 13:06:54.001583  136192 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.204769167s)
	I1213 13:06:54.003535  136192 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1213 13:06:54.003553  136192 out.go:179] * Verifying csi-hostpath-driver addon...
	I1213 13:06:54.004867  136192 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1213 13:06:54.005571  136192 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1213 13:06:54.005828  136192 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1213 13:06:54.005845  136192 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1213 13:06:54.024316  136192 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1213 13:06:54.024337  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
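
Editor's note: the repeated kapi.go:96 entries before and after this point are minikube polling each addon's pods by label selector until they leave Pending. As a rough illustration only (not minikube's actual code path), a self-contained client-go loop doing the same kind of wait might look like the sketch below; the kubeconfig path is taken from the log above and the 500ms poll interval is an arbitrary assumption.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// allReady reports whether every pod in the slice has the Ready condition set to True.
	func allReady(pods []corev1.Pod) bool {
		for _, p := range pods {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
					break
				}
			}
			if !ready {
				return false
			}
		}
		return true
	}

	func main() {
		// Kubeconfig path copied from the log above; adjust for a local cluster (assumption).
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Poll the label selector used by the csi-hostpath-driver addon until its pods are Ready.
		selector := "kubernetes.io/minikube-addons=csi-hostpath-driver"
		for {
			pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				panic(err)
			}
			if len(pods.Items) > 0 && allReady(pods.Items) {
				fmt.Printf("all %d pods Ready for %s\n", len(pods.Items), selector)
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

An equivalent manual check from the host would be: kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=csi-hostpath-driver
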
	I1213 13:06:54.156149  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:54.158441  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:54.162873  136192 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1213 13:06:54.162898  136192 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1213 13:06:54.222339  136192 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1213 13:06:54.222361  136192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1213 13:06:54.304583  136192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1213 13:06:54.511497  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:54.647865  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:54.649801  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:55.011110  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:55.014887  136192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.835182604s)
	I1213 13:06:55.014919  136192 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.817441745s)
	I1213 13:06:55.014947  136192 system_svc.go:56] duration metric: took 1.817548307s WaitForService to wait for kubelet
	I1213 13:06:55.014960  136192 kubeadm.go:587] duration metric: took 9.997092185s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
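
Editor's note: the WaitForService step above reduces to the `sudo systemctl is-active --quiet service kubelet` command that was started at 13:06:53 and completed at 13:06:55. Run locally, without minikube's ssh_runner indirection (an assumption for this sketch), the same check is just an exit-code test:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// `systemctl is-active --quiet <unit>` prints nothing and signals the result
		// purely through its exit code: 0 means active, non-zero means anything else.
		cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
		if err := cmd.Run(); err != nil {
			fmt.Println("kubelet is not active:", err)
			return
		}
		fmt.Println("kubelet is active")
	}
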
	I1213 13:06:55.015009  136192 node_conditions.go:102] verifying NodePressure condition ...
	I1213 13:06:55.020542  136192 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 13:06:55.020565  136192 node_conditions.go:123] node cpu capacity is 2
	I1213 13:06:55.020583  136192 node_conditions.go:105] duration metric: took 5.563532ms to run NodePressure ...
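
Editor's note: the NodePressure check above reads the node's reported capacity (17734596Ki ephemeral storage, 2 CPUs). A minimal client-go sketch that pulls the same two figures, again assuming the kubeconfig path shown in the log rather than minikube's own wiring:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path copied from the log above; adjust as needed (assumption).
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			// Capacity maps resource names to quantities, e.g. "2" CPUs, "17734596Ki" storage.
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
		}
	}
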
	I1213 13:06:55.020599  136192 start.go:242] waiting for startup goroutines ...
	I1213 13:06:55.151301  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:55.151891  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:55.343942  136192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.039307008s)
	I1213 13:06:55.344896  136192 addons.go:495] Verifying addon gcp-auth=true in "addons-685870"
	I1213 13:06:55.346425  136192 out.go:179] * Verifying gcp-auth addon...
	I1213 13:06:55.348331  136192 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1213 13:06:55.358692  136192 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1213 13:06:55.358714  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:55.537275  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:55.650226  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:55.651356  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:55.852507  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:56.010170  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:56.151227  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:56.151397  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:56.353046  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:56.510296  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:56.647887  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:56.647956  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:56.851861  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:57.012745  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:57.157190  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:57.157190  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:57.354696  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:57.509661  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:57.653170  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:57.653611  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:57.853007  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:58.010553  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:58.150665  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:58.151615  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:58.351526  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:58.509705  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:58.649939  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:58.650304  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:58.852219  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:59.009742  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:59.148669  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:59.148712  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:59.352544  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:59.509773  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:59.648661  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:59.649456  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:59.851343  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:00.009117  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:00.148054  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:00.148390  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:00.353128  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:00.511104  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:00.651233  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:00.654187  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:00.852443  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:01.012029  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:01.149638  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:01.150577  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:01.353218  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:01.511461  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:01.648358  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:01.648448  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:01.851510  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:02.010294  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:02.148115  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:02.148537  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:02.352293  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:02.509943  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:02.648710  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:02.649331  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:02.851543  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:03.010187  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:03.149465  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:03.149587  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:03.352095  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:03.509740  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:03.648635  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:03.648791  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:03.851844  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:04.010731  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:04.148099  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:04.148635  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:04.355407  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:04.510244  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:04.651223  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:04.651865  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:04.854421  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:05.011494  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:05.149950  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:05.150701  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:05.353134  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:05.509741  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:05.650013  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:05.650817  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:05.852058  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:06.009626  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:06.150926  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:06.153517  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:06.352055  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:06.509965  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:06.647981  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:06.647975  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:06.852629  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:07.009189  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:07.151817  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:07.152347  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:07.351527  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:07.518865  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:07.676424  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:07.677201  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:07.852928  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:08.010863  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:08.150405  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:08.150484  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:08.352450  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:08.509904  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:08.647826  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:08.648158  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:08.852398  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:09.009602  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:09.147221  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:09.147607  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:09.352388  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:09.509723  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:09.647738  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:09.649217  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:09.853585  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:10.009622  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:10.148005  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:10.148702  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:10.351997  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:10.511342  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:10.648991  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:10.649657  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:10.852484  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:11.012328  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:11.147835  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:11.149835  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:11.354616  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:11.508798  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:11.652098  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:11.652798  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:11.854212  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:12.013648  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:12.149883  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:12.150277  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:12.351104  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:12.511725  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:12.649542  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:12.650421  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:12.852539  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:13.009491  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:13.154764  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:13.155021  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:13.353522  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:13.511028  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:13.647556  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:13.648257  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:13.857204  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:14.012599  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:14.149303  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:14.151010  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:14.357164  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:14.510149  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:14.651082  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:14.651690  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:14.854129  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:15.010864  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:15.147004  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:15.147408  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:15.351887  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:15.512484  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:15.651253  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:15.651568  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:15.984739  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:16.011025  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:16.151286  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:16.154515  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:16.354988  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:16.509242  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:16.649902  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:16.651455  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:16.851571  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:17.009766  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:17.148613  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:17.148792  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:17.351990  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:17.510458  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:17.649631  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:17.649714  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:17.852531  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:18.009715  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:18.149246  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:18.150564  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:18.352041  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:18.514711  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:18.649881  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:18.650286  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:18.852498  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:19.009608  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:19.147855  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:19.148166  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:19.351966  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:19.509274  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:19.648563  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:19.648796  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:19.851528  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:20.009420  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:20.151172  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:20.151981  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:20.352396  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:20.513697  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:20.649425  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:20.650104  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:20.852403  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:21.011488  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:21.150471  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:21.151164  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:21.358179  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:21.510023  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:21.655217  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:21.656432  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:21.852499  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:22.013949  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:22.154867  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:22.156430  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:22.353059  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:22.510482  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:22.656467  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:22.656492  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:22.851338  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:23.012595  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:23.150321  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:23.153429  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:23.352322  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:23.511107  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:23.649478  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:23.649628  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:23.851186  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:24.009591  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:24.149058  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:24.149191  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:24.353595  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:24.508855  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:24.648282  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:24.649084  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:24.853158  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:25.010985  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:25.151588  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:25.154023  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:25.352148  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:25.509588  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:25.658096  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:25.658421  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:25.853451  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:26.010305  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:26.150985  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:26.152091  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:26.352358  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:26.513597  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:26.649842  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:26.651346  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:26.852790  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:27.008857  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:27.148981  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:27.149564  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:27.398515  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:27.510403  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:27.649742  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:27.651064  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:27.853304  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:28.011360  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:28.155006  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:07:28.155030  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:28.354104  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:28.510052  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:28.648839  136192 kapi.go:107] duration metric: took 36.004779434s to wait for kubernetes.io/minikube-addons=registry ...
	I1213 13:07:28.649980  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:28.852182  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:29.009724  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:29.148338  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:29.352524  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:29.508656  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:29.650898  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:29.853513  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:30.008794  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:30.147594  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:30.352559  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:30.508661  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:30.647670  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:30.851784  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:31.010170  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:31.153251  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:31.351209  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:31.511100  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:31.648182  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:31.855390  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:32.010547  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:32.149398  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:32.353164  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:32.510474  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:32.649868  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:32.851933  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:33.012855  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:33.151049  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:33.463863  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:33.513615  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:33.648792  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:33.853970  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:34.010395  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:34.148494  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:34.352056  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:34.510025  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:34.648176  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:34.853103  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:35.009840  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:35.148010  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:35.352402  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:35.510657  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:35.648806  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:35.852331  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:36.010692  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:36.155458  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:36.353691  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:36.509757  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:36.650065  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:36.871954  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:37.012342  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:37.148540  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:37.351249  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:37.509713  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:37.648425  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:37.852840  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:38.009419  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:38.152399  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:38.351801  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:38.514383  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:38.648994  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:38.854541  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:39.012033  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:39.155835  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:39.354084  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:39.510435  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:39.651527  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:39.853464  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:40.012628  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:40.151181  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:40.354051  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:40.511683  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:40.650235  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:40.856471  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:41.011513  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:41.150258  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:41.353389  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:41.514415  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:41.649250  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:41.853343  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:42.012469  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:42.148354  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:42.353771  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:42.510488  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:42.648051  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:42.854214  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:43.014509  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:43.151363  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:43.354238  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:43.521742  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:43.648551  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:43.855215  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:44.012880  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:44.150380  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:44.354838  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:44.510034  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:44.653354  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:44.854230  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:45.010383  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:45.153582  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:45.364491  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:45.511344  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:45.648328  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:45.867233  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:46.012521  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:46.149137  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:46.354891  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:46.510696  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:46.656491  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:46.853675  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:47.019658  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:47.152860  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:47.353033  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:47.509805  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:47.649046  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:47.856557  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:48.010442  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:48.384429  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:48.384540  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:48.514197  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:48.648939  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:48.854594  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:49.011044  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:49.150188  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:49.353056  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:49.510849  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:49.650318  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:49.851031  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:50.011011  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:50.163965  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:50.353383  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:50.515330  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:50.652424  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:50.853827  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:51.012138  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:51.149280  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:51.353874  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:51.509633  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:51.648870  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:51.855016  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:52.011520  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:52.149658  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:52.355005  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:52.515616  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:52.650348  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:52.852031  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:53.011962  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:53.149203  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:53.352541  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:53.509623  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:53.654488  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:53.852567  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:54.010197  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:54.151792  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:54.354337  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:54.513460  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:54.649722  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:54.852193  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:55.015541  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:55.149988  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:55.352683  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:55.510315  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:55.649065  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:55.853703  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:56.008751  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:56.149641  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:56.354590  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:56.511389  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:56.653820  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:56.854397  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:57.009805  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:57.152817  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:57.354685  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:57.509293  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:57.648877  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:57.855009  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:58.014001  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:07:58.154946  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:58.359163  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:58.514124  136192 kapi.go:107] duration metric: took 1m4.508546084s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1213 13:07:58.652029  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:58.860114  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:59.150386  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:59.352308  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:07:59.649206  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:07:59.854278  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:08:00.203179  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:08:00.357204  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:08:00.650279  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:08:00.854683  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:08:01.148993  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:08:01.355169  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:08:01.648419  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:08:01.852261  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:08:02.147782  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:08:02.352595  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:08:02.651457  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:08:02.853402  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:08:03.148572  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:08:03.353393  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:08:03.648809  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:08:03.855134  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:08:04.149820  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:08:04.353128  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:08:04.651601  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:08:04.852434  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:08:05.148456  136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:08:05.351834  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:08:05.651340  136192 kapi.go:107] duration metric: took 1m13.00710089s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1213 13:08:05.851597  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:08:06.352640  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:08:06.852551  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:08:07.353244  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:08:07.853800  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:08:08.354923  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:08:08.852868  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:08:09.352601  136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:08:09.852375  136192 kapi.go:107] duration metric: took 1m14.504042081s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1213 13:08:09.854004  136192 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-685870 cluster.
	I1213 13:08:09.855062  136192 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1213 13:08:09.856107  136192 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1213 13:08:09.857205  136192 out.go:179] * Enabled addons: cloud-spanner, amd-gpu-device-plugin, default-storageclass, ingress-dns, storage-provisioner, registry-creds, inspektor-gadget, nvidia-device-plugin, storage-provisioner-rancher, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1213 13:08:09.858576  136192 addons.go:530] duration metric: took 1m24.840708372s for enable addons: enabled=[cloud-spanner amd-gpu-device-plugin default-storageclass ingress-dns storage-provisioner registry-creds inspektor-gadget nvidia-device-plugin storage-provisioner-rancher metrics-server yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1213 13:08:09.858623  136192 start.go:247] waiting for cluster config update ...
	I1213 13:08:09.858648  136192 start.go:256] writing updated cluster config ...
	I1213 13:08:09.858966  136192 ssh_runner.go:195] Run: rm -f paused
	I1213 13:08:09.864114  136192 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 13:08:09.867207  136192 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ztskd" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:08:09.872181  136192 pod_ready.go:94] pod "coredns-66bc5c9577-ztskd" is "Ready"
	I1213 13:08:09.872212  136192 pod_ready.go:86] duration metric: took 4.984923ms for pod "coredns-66bc5c9577-ztskd" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:08:09.874516  136192 pod_ready.go:83] waiting for pod "etcd-addons-685870" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:08:09.877800  136192 pod_ready.go:94] pod "etcd-addons-685870" is "Ready"
	I1213 13:08:09.877829  136192 pod_ready.go:86] duration metric: took 3.29268ms for pod "etcd-addons-685870" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:08:09.879977  136192 pod_ready.go:83] waiting for pod "kube-apiserver-addons-685870" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:08:09.883832  136192 pod_ready.go:94] pod "kube-apiserver-addons-685870" is "Ready"
	I1213 13:08:09.883857  136192 pod_ready.go:86] duration metric: took 3.859ms for pod "kube-apiserver-addons-685870" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:08:09.885621  136192 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-685870" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:08:10.267700  136192 pod_ready.go:94] pod "kube-controller-manager-addons-685870" is "Ready"
	I1213 13:08:10.267746  136192 pod_ready.go:86] duration metric: took 382.106967ms for pod "kube-controller-manager-addons-685870" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:08:10.467891  136192 pod_ready.go:83] waiting for pod "kube-proxy-hlmj5" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:08:10.868424  136192 pod_ready.go:94] pod "kube-proxy-hlmj5" is "Ready"
	I1213 13:08:10.868464  136192 pod_ready.go:86] duration metric: took 400.533636ms for pod "kube-proxy-hlmj5" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:08:11.068552  136192 pod_ready.go:83] waiting for pod "kube-scheduler-addons-685870" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:08:11.468278  136192 pod_ready.go:94] pod "kube-scheduler-addons-685870" is "Ready"
	I1213 13:08:11.468318  136192 pod_ready.go:86] duration metric: took 399.732643ms for pod "kube-scheduler-addons-685870" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:08:11.468336  136192 pod_ready.go:40] duration metric: took 1.604195099s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 13:08:11.517399  136192 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1213 13:08:11.519022  136192 out.go:179] * Done! kubectl is now configured to use "addons-685870" cluster and "default" namespace by default
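	
	[Editor's note] The gcp-auth addon messages above say that credentials are mounted into every pod unless the pod carries a label with the `gcp-auth-skip-secret` key. The following is a minimal sketch (not part of the test run) of building such a pod manifest with client-go types; the pod name, the image tag, and the label value "true" are illustrative assumptions — only the label key comes from the log output above.
	
	// opt_out_sketch.go — hypothetical example, assuming k8s.io/api and
	// k8s.io/apimachinery are available as Go module dependencies.
	package main
	
	import (
		"encoding/json"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)
	
	func main() {
		pod := corev1.Pod{
			TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
			ObjectMeta: metav1.ObjectMeta{
				Name: "no-gcp-creds", // hypothetical pod name
				Labels: map[string]string{
					// Label key taken from the addon message; the value is an assumption.
					"gcp-auth-skip-secret": "true",
				},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{
					// Image is illustrative (the repo appears in the CRI-O log below).
					{Name: "app", Image: "public.ecr.aws/nginx/nginx:latest"},
				},
			},
		}
	
		out, err := json.MarshalIndent(&pod, "", "  ")
		if err != nil {
			panic(err)
		}
		// The printed manifest could be piped to `kubectl apply -f -`.
		fmt.Println(string(out))
	}
	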
	
	
	==> CRI-O <==
	Dec 13 13:11:06 addons-685870 crio[813]: time="2025-12-13 13:11:06.110510307Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fe479ff2-c705-4dea-bb47-81709990b04e name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 13:11:06 addons-685870 crio[813]: time="2025-12-13 13:11:06.110933798Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fe479ff2-c705-4dea-bb47-81709990b04e name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 13:11:06 addons-685870 crio[813]: time="2025-12-13 13:11:06.111302610Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6753fd85fcd829c813398ffdda13c27e0224e5ae8b0c212ea8c363b3d2555de8,PodSandboxId:0a7c55d23ca61b7aa4a13730e1600d23288bb5f6c21f98b36fb9f5efba23c869,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765631324333940208,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9264c705-e985-4103-9edc-eaa92549670d,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e12ed54bc37e8a810b2bcf11d2d5520536632e7ceef8b58e4e00ed0e45a1d793,PodSandboxId:a6a14dca33627c6c3a76de9eef91be13cef2750d4410e39091bf3c11dea67042,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765631296208329080,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 28996d9e-2b5f-4e3c-b142-b2a3308dd12c,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb1cdcb8f278f0b0ac210e671a5109dbe3dbf17cb852b675fca79a8e4d650ea7,PodSandboxId:c613d524d2eda396777a0be2c90ef5cec072a50f9c114d5f737863b9d4f0c230,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765631284681922883,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-wnvpr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ba3cb1ed-ba8f-4a57-9ebc-48e9b1ca789c,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:ecee2b5c3135febd56e2581fe7ce863b76c7becb1b2d6149bf37854b7b6c86a9,PodSandboxId:7e1023281f49143b11f685483f4f2992aa0ab69e0fc2d5a80c6695b5087209cf,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765631263481446227,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-df6ws,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e937ef53-6752-4fcf-b14d-7fc9c61e2822,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a27f69be2a0c8e8058841bcd906baff248a5298620bd9ad010dc18e106e3efb,PodSandboxId:8b98c26ee365e368565e4e9000251a027fdd0dad66543d8b5f56c8b994794ff1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765631263354804192,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-fj4wb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d0daeae6-7a6e-4329-b7af-d85916aa1733,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c79972183e309ffabfb8539ffd6369672b8fe80156f4b37324c71183efe2377,PodSandboxId:084b3e2e81bd52415040bf457b5d939ecf54f9a31d55b29510a6a96ee98e7187,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765631237512834278,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7214f42-abd1-4b9a-a6a5-431cff38e423,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bd3cedf932f2846fc9e7438c16ed4535e6a732725e690627fd53090376f70a1,PodSandboxId:2c84a2744aa6caedca2a609b6fb8b82940155c69a4493023410cb07c9c55d58e,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&I
mageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765631221292006391,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-sl2f8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10079e75-52ad-4ae2-97a6-8c8a76f8cd2e,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24e857c13b18225a46dc09a8a369409eba68c7bc0f71370e487890c74c2f44da,PodSandboxId:86d0f1c9640695acd2f02a327a38911e1a52146632e001a91dda0c909a04784f,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765631212020303563,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bef2cdab-e284-4ee0-b0ab-61d8d1ea5f8e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beae040dada503d5a4152c855ee36f7436586cc0774ea652facc88eeabe45737,PodSandboxId:522a168346f8b9daebb875e37a2312f79a93b92c1d1ed2e8eda4b3bff11a258f,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765631205923967488,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-ztskd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b623913-ee74-409f-a7bc-5fda744c8583,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:630eea4f4f055f1d0825770d1d580310b0b51f058e1711e54a62394836b247dd,PodSandboxId:faabe943f3b6cbc80c3fa80de4eed0665e1b012378e8819d103befad959ba547,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765631204355252376,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hlmj5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 431fd1a1-aa6e-4095-af35-72087499f30a,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0421f431c6c33d3c43ff62376128999b83037865e0519d591d4ba2d20f130697,PodSandboxId:588ae9df297adf54f1470c4ccab2809b0280469689bbcb95fa0b580a116340be,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765631192805207604,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-685870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ae31683d0b65ea196103472695e50ec,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCo
unt: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:309ba569e8bc0584154808bbc0a9005c3d74c04dd4028e42972ad622398ee1a0,PodSandboxId:f776546868eaf2ee8a108614fa2cade081056bc694d87b89fb3ab75090c698d2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765631192824698744,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-685870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f40f59c68bc1ab457c8ac3efb96ad62a,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-
port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70281e43646fa99dc87975cf5eea957a37772d62d01261defa66be92bdfb79a1,PodSandboxId:49619928c3b2b45beae713de444e4195b9a2b72d26de5b60ece9ac629088dab2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765631192787344560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-685870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d07133aa7ef6081dfb5c
33d1096c7d7,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f159075538cb5e1ad4bda439214bba87babd48e4b08ada0c68a49789f835cd6,PodSandboxId:5a3edbbcfca145fdfcf8c1ed964191cfaeb4e723ff86e644b452724a8b51b386,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765631192776222969,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-api
server-addons-685870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77b4ccda251d6114c3d01a6ec894549d,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fe479ff2-c705-4dea-bb47-81709990b04e name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 13:11:06 addons-685870 crio[813]: time="2025-12-13 13:11:06.136668689Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/1.0" file="docker/docker_client.go:631"
	Dec 13 13:11:06 addons-685870 crio[813]: time="2025-12-13 13:11:06.160014970Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8780b81a-5135-40c9-bf6f-9de1c3e79542 name=/runtime.v1.RuntimeService/Version
	Dec 13 13:11:06 addons-685870 crio[813]: time="2025-12-13 13:11:06.160215589Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8780b81a-5135-40c9-bf6f-9de1c3e79542 name=/runtime.v1.RuntimeService/Version
	Dec 13 13:11:06 addons-685870 crio[813]: time="2025-12-13 13:11:06.161896867Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fb72d533-c0c1-4f26-973c-b7c533873bee name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 13:11:06 addons-685870 crio[813]: time="2025-12-13 13:11:06.163339853Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765631466163304111,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:545771,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fb72d533-c0c1-4f26-973c-b7c533873bee name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 13:11:06 addons-685870 crio[813]: time="2025-12-13 13:11:06.164343841Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=20b28bab-1ba4-4c96-84fe-0fb9cc8186d1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 13:11:06 addons-685870 crio[813]: time="2025-12-13 13:11:06.164423141Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=20b28bab-1ba4-4c96-84fe-0fb9cc8186d1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 13:11:06 addons-685870 crio[813]: time="2025-12-13 13:11:06.164841568Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6753fd85fcd829c813398ffdda13c27e0224e5ae8b0c212ea8c363b3d2555de8,PodSandboxId:0a7c55d23ca61b7aa4a13730e1600d23288bb5f6c21f98b36fb9f5efba23c869,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765631324333940208,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9264c705-e985-4103-9edc-eaa92549670d,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e12ed54bc37e8a810b2bcf11d2d5520536632e7ceef8b58e4e00ed0e45a1d793,PodSandboxId:a6a14dca33627c6c3a76de9eef91be13cef2750d4410e39091bf3c11dea67042,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765631296208329080,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 28996d9e-2b5f-4e3c-b142-b2a3308dd12c,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb1cdcb8f278f0b0ac210e671a5109dbe3dbf17cb852b675fca79a8e4d650ea7,PodSandboxId:c613d524d2eda396777a0be2c90ef5cec072a50f9c114d5f737863b9d4f0c230,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765631284681922883,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-wnvpr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ba3cb1ed-ba8f-4a57-9ebc-48e9b1ca789c,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:ecee2b5c3135febd56e2581fe7ce863b76c7becb1b2d6149bf37854b7b6c86a9,PodSandboxId:7e1023281f49143b11f685483f4f2992aa0ab69e0fc2d5a80c6695b5087209cf,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765631263481446227,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-df6ws,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e937ef53-6752-4fcf-b14d-7fc9c61e2822,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a27f69be2a0c8e8058841bcd906baff248a5298620bd9ad010dc18e106e3efb,PodSandboxId:8b98c26ee365e368565e4e9000251a027fdd0dad66543d8b5f56c8b994794ff1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765631263354804192,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-fj4wb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d0daeae6-7a6e-4329-b7af-d85916aa1733,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c79972183e309ffabfb8539ffd6369672b8fe80156f4b37324c71183efe2377,PodSandboxId:084b3e2e81bd52415040bf457b5d939ecf54f9a31d55b29510a6a96ee98e7187,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765631237512834278,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7214f42-abd1-4b9a-a6a5-431cff38e423,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bd3cedf932f2846fc9e7438c16ed4535e6a732725e690627fd53090376f70a1,PodSandboxId:2c84a2744aa6caedca2a609b6fb8b82940155c69a4493023410cb07c9c55d58e,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&I
mageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765631221292006391,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-sl2f8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10079e75-52ad-4ae2-97a6-8c8a76f8cd2e,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24e857c13b18225a46dc09a8a369409eba68c7bc0f71370e487890c74c2f44da,PodSandboxId:86d0f1c9640695acd2f02a327a38911e1a52146632e001a91dda0c909a04784f,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765631212020303563,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bef2cdab-e284-4ee0-b0ab-61d8d1ea5f8e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beae040dada503d5a4152c855ee36f7436586cc0774ea652facc88eeabe45737,PodSandboxId:522a168346f8b9daebb875e37a2312f79a93b92c1d1ed2e8eda4b3bff11a258f,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765631205923967488,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-ztskd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b623913-ee74-409f-a7bc-5fda744c8583,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:630eea4f4f055f1d0825770d1d580310b0b51f058e1711e54a62394836b247dd,PodSandboxId:faabe943f3b6cbc80c3fa80de4eed0665e1b012378e8819d103befad959ba547,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765631204355252376,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hlmj5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 431fd1a1-aa6e-4095-af35-72087499f30a,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0421f431c6c33d3c43ff62376128999b83037865e0519d591d4ba2d20f130697,PodSandboxId:588ae9df297adf54f1470c4ccab2809b0280469689bbcb95fa0b580a116340be,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765631192805207604,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-685870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ae31683d0b65ea196103472695e50ec,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCo
unt: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:309ba569e8bc0584154808bbc0a9005c3d74c04dd4028e42972ad622398ee1a0,PodSandboxId:f776546868eaf2ee8a108614fa2cade081056bc694d87b89fb3ab75090c698d2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765631192824698744,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-685870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f40f59c68bc1ab457c8ac3efb96ad62a,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-
port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70281e43646fa99dc87975cf5eea957a37772d62d01261defa66be92bdfb79a1,PodSandboxId:49619928c3b2b45beae713de444e4195b9a2b72d26de5b60ece9ac629088dab2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765631192787344560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-685870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d07133aa7ef6081dfb5c
33d1096c7d7,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f159075538cb5e1ad4bda439214bba87babd48e4b08ada0c68a49789f835cd6,PodSandboxId:5a3edbbcfca145fdfcf8c1ed964191cfaeb4e723ff86e644b452724a8b51b386,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765631192776222969,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-api
server-addons-685870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77b4ccda251d6114c3d01a6ec894549d,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=20b28bab-1ba4-4c96-84fe-0fb9cc8186d1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 13:11:06 addons-685870 crio[813]: time="2025-12-13 13:11:06.198542296Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ff511621-2318-4645-b6b9-9f2086acb6f0 name=/runtime.v1.RuntimeService/Version
	Dec 13 13:11:06 addons-685870 crio[813]: time="2025-12-13 13:11:06.198909538Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ff511621-2318-4645-b6b9-9f2086acb6f0 name=/runtime.v1.RuntimeService/Version
	Dec 13 13:11:06 addons-685870 crio[813]: time="2025-12-13 13:11:06.200880419Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=521dd6f3-8790-458b-91b1-e728bde74085 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 13:11:06 addons-685870 crio[813]: time="2025-12-13 13:11:06.202618565Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765631466202584553,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:545771,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=521dd6f3-8790-458b-91b1-e728bde74085 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 13:11:06 addons-685870 crio[813]: time="2025-12-13 13:11:06.203695814Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=77dae93d-abb9-4d2b-9acf-35d24bde4375 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 13:11:06 addons-685870 crio[813]: time="2025-12-13 13:11:06.203985440Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=77dae93d-abb9-4d2b-9acf-35d24bde4375 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 13:11:06 addons-685870 crio[813]: time="2025-12-13 13:11:06.204322782Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6753fd85fcd829c813398ffdda13c27e0224e5ae8b0c212ea8c363b3d2555de8,PodSandboxId:0a7c55d23ca61b7aa4a13730e1600d23288bb5f6c21f98b36fb9f5efba23c869,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765631324333940208,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9264c705-e985-4103-9edc-eaa92549670d,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e12ed54bc37e8a810b2bcf11d2d5520536632e7ceef8b58e4e00ed0e45a1d793,PodSandboxId:a6a14dca33627c6c3a76de9eef91be13cef2750d4410e39091bf3c11dea67042,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765631296208329080,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 28996d9e-2b5f-4e3c-b142-b2a3308dd12c,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb1cdcb8f278f0b0ac210e671a5109dbe3dbf17cb852b675fca79a8e4d650ea7,PodSandboxId:c613d524d2eda396777a0be2c90ef5cec072a50f9c114d5f737863b9d4f0c230,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765631284681922883,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-wnvpr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ba3cb1ed-ba8f-4a57-9ebc-48e9b1ca789c,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:ecee2b5c3135febd56e2581fe7ce863b76c7becb1b2d6149bf37854b7b6c86a9,PodSandboxId:7e1023281f49143b11f685483f4f2992aa0ab69e0fc2d5a80c6695b5087209cf,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765631263481446227,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-df6ws,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e937ef53-6752-4fcf-b14d-7fc9c61e2822,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a27f69be2a0c8e8058841bcd906baff248a5298620bd9ad010dc18e106e3efb,PodSandboxId:8b98c26ee365e368565e4e9000251a027fdd0dad66543d8b5f56c8b994794ff1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765631263354804192,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-fj4wb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d0daeae6-7a6e-4329-b7af-d85916aa1733,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c79972183e309ffabfb8539ffd6369672b8fe80156f4b37324c71183efe2377,PodSandboxId:084b3e2e81bd52415040bf457b5d939ecf54f9a31d55b29510a6a96ee98e7187,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765631237512834278,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7214f42-abd1-4b9a-a6a5-431cff38e423,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bd3cedf932f2846fc9e7438c16ed4535e6a732725e690627fd53090376f70a1,PodSandboxId:2c84a2744aa6caedca2a609b6fb8b82940155c69a4493023410cb07c9c55d58e,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&I
mageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765631221292006391,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-sl2f8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10079e75-52ad-4ae2-97a6-8c8a76f8cd2e,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24e857c13b18225a46dc09a8a369409eba68c7bc0f71370e487890c74c2f44da,PodSandboxId:86d0f1c9640695acd2f02a327a38911e1a52146632e001a91dda0c909a04784f,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765631212020303563,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bef2cdab-e284-4ee0-b0ab-61d8d1ea5f8e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beae040dada503d5a4152c855ee36f7436586cc0774ea652facc88eeabe45737,PodSandboxId:522a168346f8b9daebb875e37a2312f79a93b92c1d1ed2e8eda4b3bff11a258f,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765631205923967488,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-ztskd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b623913-ee74-409f-a7bc-5fda744c8583,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:630eea4f4f055f1d0825770d1d580310b0b51f058e1711e54a62394836b247dd,PodSandboxId:faabe943f3b6cbc80c3fa80de4eed0665e1b012378e8819d103befad959ba547,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765631204355252376,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hlmj5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 431fd1a1-aa6e-4095-af35-72087499f30a,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0421f431c6c33d3c43ff62376128999b83037865e0519d591d4ba2d20f130697,PodSandboxId:588ae9df297adf54f1470c4ccab2809b0280469689bbcb95fa0b580a116340be,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765631192805207604,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-685870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ae31683d0b65ea196103472695e50ec,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCo
unt: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:309ba569e8bc0584154808bbc0a9005c3d74c04dd4028e42972ad622398ee1a0,PodSandboxId:f776546868eaf2ee8a108614fa2cade081056bc694d87b89fb3ab75090c698d2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765631192824698744,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-685870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f40f59c68bc1ab457c8ac3efb96ad62a,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-
port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70281e43646fa99dc87975cf5eea957a37772d62d01261defa66be92bdfb79a1,PodSandboxId:49619928c3b2b45beae713de444e4195b9a2b72d26de5b60ece9ac629088dab2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765631192787344560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-685870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d07133aa7ef6081dfb5c
33d1096c7d7,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f159075538cb5e1ad4bda439214bba87babd48e4b08ada0c68a49789f835cd6,PodSandboxId:5a3edbbcfca145fdfcf8c1ed964191cfaeb4e723ff86e644b452724a8b51b386,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765631192776222969,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-api
server-addons-685870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77b4ccda251d6114c3d01a6ec894549d,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=77dae93d-abb9-4d2b-9acf-35d24bde4375 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 13:11:06 addons-685870 crio[813]: time="2025-12-13 13:11:06.236648335Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6e4e47dd-69e9-4ccc-86af-e8101f7198a3 name=/runtime.v1.RuntimeService/Version
	Dec 13 13:11:06 addons-685870 crio[813]: time="2025-12-13 13:11:06.236985438Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6e4e47dd-69e9-4ccc-86af-e8101f7198a3 name=/runtime.v1.RuntimeService/Version
	Dec 13 13:11:06 addons-685870 crio[813]: time="2025-12-13 13:11:06.238385639Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a224084f-95f1-4e5e-aaeb-989f8d94abdb name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 13:11:06 addons-685870 crio[813]: time="2025-12-13 13:11:06.240636031Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765631466240600651,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:545771,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a224084f-95f1-4e5e-aaeb-989f8d94abdb name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 13:11:06 addons-685870 crio[813]: time="2025-12-13 13:11:06.241943198Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6f91276b-b1ae-4e59-8b4e-0a4fcde38dbb name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 13:11:06 addons-685870 crio[813]: time="2025-12-13 13:11:06.242227734Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6f91276b-b1ae-4e59-8b4e-0a4fcde38dbb name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 13:11:06 addons-685870 crio[813]: time="2025-12-13 13:11:06.242840561Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6753fd85fcd829c813398ffdda13c27e0224e5ae8b0c212ea8c363b3d2555de8,PodSandboxId:0a7c55d23ca61b7aa4a13730e1600d23288bb5f6c21f98b36fb9f5efba23c869,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765631324333940208,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9264c705-e985-4103-9edc-eaa92549670d,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e12ed54bc37e8a810b2bcf11d2d5520536632e7ceef8b58e4e00ed0e45a1d793,PodSandboxId:a6a14dca33627c6c3a76de9eef91be13cef2750d4410e39091bf3c11dea67042,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765631296208329080,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 28996d9e-2b5f-4e3c-b142-b2a3308dd12c,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb1cdcb8f278f0b0ac210e671a5109dbe3dbf17cb852b675fca79a8e4d650ea7,PodSandboxId:c613d524d2eda396777a0be2c90ef5cec072a50f9c114d5f737863b9d4f0c230,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765631284681922883,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-wnvpr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ba3cb1ed-ba8f-4a57-9ebc-48e9b1ca789c,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:ecee2b5c3135febd56e2581fe7ce863b76c7becb1b2d6149bf37854b7b6c86a9,PodSandboxId:7e1023281f49143b11f685483f4f2992aa0ab69e0fc2d5a80c6695b5087209cf,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765631263481446227,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-df6ws,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e937ef53-6752-4fcf-b14d-7fc9c61e2822,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a27f69be2a0c8e8058841bcd906baff248a5298620bd9ad010dc18e106e3efb,PodSandboxId:8b98c26ee365e368565e4e9000251a027fdd0dad66543d8b5f56c8b994794ff1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765631263354804192,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-fj4wb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d0daeae6-7a6e-4329-b7af-d85916aa1733,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c79972183e309ffabfb8539ffd6369672b8fe80156f4b37324c71183efe2377,PodSandboxId:084b3e2e81bd52415040bf457b5d939ecf54f9a31d55b29510a6a96ee98e7187,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765631237512834278,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7214f42-abd1-4b9a-a6a5-431cff38e423,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bd3cedf932f2846fc9e7438c16ed4535e6a732725e690627fd53090376f70a1,PodSandboxId:2c84a2744aa6caedca2a609b6fb8b82940155c69a4493023410cb07c9c55d58e,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&I
mageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765631221292006391,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-sl2f8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10079e75-52ad-4ae2-97a6-8c8a76f8cd2e,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24e857c13b18225a46dc09a8a369409eba68c7bc0f71370e487890c74c2f44da,PodSandboxId:86d0f1c9640695acd2f02a327a38911e1a52146632e001a91dda0c909a04784f,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765631212020303563,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bef2cdab-e284-4ee0-b0ab-61d8d1ea5f8e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beae040dada503d5a4152c855ee36f7436586cc0774ea652facc88eeabe45737,PodSandboxId:522a168346f8b9daebb875e37a2312f79a93b92c1d1ed2e8eda4b3bff11a258f,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765631205923967488,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-ztskd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b623913-ee74-409f-a7bc-5fda744c8583,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:630eea4f4f055f1d0825770d1d580310b0b51f058e1711e54a62394836b247dd,PodSandboxId:faabe943f3b6cbc80c3fa80de4eed0665e1b012378e8819d103befad959ba547,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765631204355252376,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hlmj5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 431fd1a1-aa6e-4095-af35-72087499f30a,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0421f431c6c33d3c43ff62376128999b83037865e0519d591d4ba2d20f130697,PodSandboxId:588ae9df297adf54f1470c4ccab2809b0280469689bbcb95fa0b580a116340be,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765631192805207604,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-685870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ae31683d0b65ea196103472695e50ec,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCo
unt: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:309ba569e8bc0584154808bbc0a9005c3d74c04dd4028e42972ad622398ee1a0,PodSandboxId:f776546868eaf2ee8a108614fa2cade081056bc694d87b89fb3ab75090c698d2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765631192824698744,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-685870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f40f59c68bc1ab457c8ac3efb96ad62a,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-
port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70281e43646fa99dc87975cf5eea957a37772d62d01261defa66be92bdfb79a1,PodSandboxId:49619928c3b2b45beae713de444e4195b9a2b72d26de5b60ece9ac629088dab2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765631192787344560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-685870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d07133aa7ef6081dfb5c
33d1096c7d7,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f159075538cb5e1ad4bda439214bba87babd48e4b08ada0c68a49789f835cd6,PodSandboxId:5a3edbbcfca145fdfcf8c1ed964191cfaeb4e723ff86e644b452724a8b51b386,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765631192776222969,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-api
server-addons-685870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77b4ccda251d6114c3d01a6ec894549d,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6f91276b-b1ae-4e59-8b4e-0a4fcde38dbb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	6753fd85fcd82       public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff                           2 minutes ago       Running             nginx                     0                   0a7c55d23ca61       nginx                                       default
	e12ed54bc37e8       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago       Running             busybox                   0                   a6a14dca33627       busybox                                     default
	eb1cdcb8f278f       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad             3 minutes ago       Running             controller                0                   c613d524d2eda       ingress-nginx-controller-85d4c799dd-wnvpr   ingress-nginx
	ecee2b5c3135f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285   3 minutes ago       Exited              patch                     0                   7e1023281f491       ingress-nginx-admission-patch-df6ws         ingress-nginx
	2a27f69be2a0c       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285   3 minutes ago       Exited              create                    0                   8b98c26ee365e       ingress-nginx-admission-create-fj4wb        ingress-nginx
	4c79972183e30       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               3 minutes ago       Running             minikube-ingress-dns      0                   084b3e2e81bd5       kube-ingress-dns-minikube                   kube-system
	0bd3cedf932f2       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   2c84a2744aa6c       amd-gpu-device-plugin-sl2f8                 kube-system
	24e857c13b182       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   86d0f1c964069       storage-provisioner                         kube-system
	beae040dada50       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             4 minutes ago       Running             coredns                   0                   522a168346f8b       coredns-66bc5c9577-ztskd                    kube-system
	630eea4f4f055       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                             4 minutes ago       Running             kube-proxy                0                   faabe943f3b6c       kube-proxy-hlmj5                            kube-system
	309ba569e8bc0       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                             4 minutes ago       Running             kube-scheduler            0                   f776546868eaf       kube-scheduler-addons-685870                kube-system
	0421f431c6c33       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                             4 minutes ago       Running             etcd                      0                   588ae9df297ad       etcd-addons-685870                          kube-system
	70281e43646fa       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                             4 minutes ago       Running             kube-controller-manager   0                   49619928c3b2b       kube-controller-manager-addons-685870       kube-system
	1f159075538cb       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                             4 minutes ago       Running             kube-apiserver            0                   5a3edbbcfca14       kube-apiserver-addons-685870                kube-system
	
	
	==> coredns [beae040dada503d5a4152c855ee36f7436586cc0774ea652facc88eeabe45737] <==
	[INFO] 10.244.0.8:59086 - 63200 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000294388s
	[INFO] 10.244.0.8:59086 - 64765 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.00006935s
	[INFO] 10.244.0.8:59086 - 60941 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000065538s
	[INFO] 10.244.0.8:59086 - 14505 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00008602s
	[INFO] 10.244.0.8:59086 - 24433 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000055498s
	[INFO] 10.244.0.8:59086 - 7702 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000082981s
	[INFO] 10.244.0.8:59086 - 54077 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000077065s
	[INFO] 10.244.0.8:54771 - 7096 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000129029s
	[INFO] 10.244.0.8:54771 - 7408 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000111744s
	[INFO] 10.244.0.8:57211 - 41080 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000081259s
	[INFO] 10.244.0.8:57211 - 40819 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000108437s
	[INFO] 10.244.0.8:58564 - 5912 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000086482s
	[INFO] 10.244.0.8:58564 - 6175 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000091929s
	[INFO] 10.244.0.8:57556 - 17781 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000095932s
	[INFO] 10.244.0.8:57556 - 18005 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000101347s
	[INFO] 10.244.0.23:40332 - 2710 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000586979s
	[INFO] 10.244.0.23:55247 - 29803 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000188394s
	[INFO] 10.244.0.23:60868 - 52799 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000116926s
	[INFO] 10.244.0.23:52338 - 64823 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000184193s
	[INFO] 10.244.0.23:53118 - 54286 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000089319s
	[INFO] 10.244.0.23:60500 - 58517 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00012023s
	[INFO] 10.244.0.23:57103 - 50773 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001072713s
	[INFO] 10.244.0.23:54850 - 34747 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.00376797s
	[INFO] 10.244.0.28:59699 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000269278s
	[INFO] 10.244.0.28:58636 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000258104s
	
	
	==> describe nodes <==
	Name:               addons-685870
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-685870
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7
	                    minikube.k8s.io/name=addons-685870
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T13_06_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-685870
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 13:06:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-685870
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 13:11:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 13:09:12 +0000   Sat, 13 Dec 2025 13:06:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 13:09:12 +0000   Sat, 13 Dec 2025 13:06:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 13:09:12 +0000   Sat, 13 Dec 2025 13:06:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 13:09:12 +0000   Sat, 13 Dec 2025 13:06:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.155
	  Hostname:    addons-685870
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 2316754160b94d48b988554cdedf00bd
	  System UUID:                23167541-60b9-4d48-b988-554cdedf00bd
	  Boot ID:                    9ac98f12-9267-4c27-875b-a5744a9fc8da
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m54s
	  default                     hello-world-app-5d498dc89-mvsgp              0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-wnvpr    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m14s
	  kube-system                 amd-gpu-device-plugin-sl2f8                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 coredns-66bc5c9577-ztskd                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m22s
	  kube-system                 etcd-addons-685870                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m27s
	  kube-system                 kube-apiserver-addons-685870                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 kube-controller-manager-addons-685870        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-hlmj5                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 kube-scheduler-addons-685870                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m21s  kube-proxy       
	  Normal  Starting                 4m27s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m27s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m27s  kubelet          Node addons-685870 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m27s  kubelet          Node addons-685870 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m27s  kubelet          Node addons-685870 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m26s  kubelet          Node addons-685870 status is now: NodeReady
	  Normal  RegisteredNode           4m23s  node-controller  Node addons-685870 event: Registered Node addons-685870 in Controller
	
	
	==> dmesg <==
	[  +0.766202] kauditd_printk_skb: 332 callbacks suppressed
	[  +0.364400] kauditd_printk_skb: 409 callbacks suppressed
	[Dec13 13:07] kauditd_printk_skb: 268 callbacks suppressed
	[  +6.808503] kauditd_printk_skb: 5 callbacks suppressed
	[  +9.411617] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.965243] kauditd_printk_skb: 26 callbacks suppressed
	[ +10.357546] kauditd_printk_skb: 32 callbacks suppressed
	[  +8.122341] kauditd_printk_skb: 26 callbacks suppressed
	[  +2.193663] kauditd_printk_skb: 192 callbacks suppressed
	[  +5.044751] kauditd_printk_skb: 80 callbacks suppressed
	[  +0.681574] kauditd_printk_skb: 99 callbacks suppressed
	[Dec13 13:08] kauditd_printk_skb: 56 callbacks suppressed
	[  +0.000069] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.183224] kauditd_printk_skb: 47 callbacks suppressed
	[  +5.885828] kauditd_printk_skb: 22 callbacks suppressed
	[  +2.952614] kauditd_printk_skb: 74 callbacks suppressed
	[  +1.254581] kauditd_printk_skb: 124 callbacks suppressed
	[  +0.743337] kauditd_printk_skb: 70 callbacks suppressed
	[  +3.017748] kauditd_printk_skb: 103 callbacks suppressed
	[Dec13 13:09] kauditd_printk_skb: 86 callbacks suppressed
	[  +0.709359] kauditd_printk_skb: 145 callbacks suppressed
	[  +0.695662] kauditd_printk_skb: 80 callbacks suppressed
	[  +7.670594] kauditd_printk_skb: 71 callbacks suppressed
	[ +10.663587] kauditd_printk_skb: 42 callbacks suppressed
	[Dec13 13:11] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [0421f431c6c33d3c43ff62376128999b83037865e0519d591d4ba2d20f130697] <==
	{"level":"warn","ts":"2025-12-13T13:07:48.372050Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"226.831867ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-13T13:07:48.372069Z","caller":"traceutil/trace.go:172","msg":"trace[1817313856] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1050; }","duration":"226.854414ms","start":"2025-12-13T13:07:48.145209Z","end":"2025-12-13T13:07:48.372063Z","steps":["trace[1817313856] 'range keys from in-memory index tree'  (duration: 226.78691ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T13:07:48.372080Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"162.120916ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2025-12-13T13:07:48.372112Z","caller":"traceutil/trace.go:172","msg":"trace[1432393094] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1050; }","duration":"162.159609ms","start":"2025-12-13T13:07:48.209945Z","end":"2025-12-13T13:07:48.372105Z","steps":["trace[1432393094] 'range keys from in-memory index tree'  (duration: 162.021402ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T13:07:48.372147Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"224.443519ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-13T13:07:48.372160Z","caller":"traceutil/trace.go:172","msg":"trace[423326700] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1050; }","duration":"224.456286ms","start":"2025-12-13T13:07:48.147700Z","end":"2025-12-13T13:07:48.372156Z","steps":["trace[423326700] 'range keys from in-memory index tree'  (duration: 224.395137ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T13:07:48.372225Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"222.402894ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/prioritylevelconfigurations\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-13T13:07:48.372236Z","caller":"traceutil/trace.go:172","msg":"trace[760549697] range","detail":"{range_begin:/registry/prioritylevelconfigurations; range_end:; response_count:0; response_revision:1050; }","duration":"222.414706ms","start":"2025-12-13T13:07:48.149818Z","end":"2025-12-13T13:07:48.372233Z","steps":["trace[760549697] 'range keys from in-memory index tree'  (duration: 222.360662ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T13:08:00.188639Z","caller":"traceutil/trace.go:172","msg":"trace[968734986] linearizableReadLoop","detail":"{readStateIndex:1152; appliedIndex:1152; }","duration":"120.891062ms","start":"2025-12-13T13:08:00.067730Z","end":"2025-12-13T13:08:00.188621Z","steps":["trace[968734986] 'read index received'  (duration: 120.884304ms)","trace[968734986] 'applied index is now lower than readState.Index'  (duration: 5.889µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-13T13:08:00.188759Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.029428ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csistoragecapacities\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-13T13:08:00.188779Z","caller":"traceutil/trace.go:172","msg":"trace[1659867722] range","detail":"{range_begin:/registry/csistoragecapacities; range_end:; response_count:0; response_revision:1119; }","duration":"121.062945ms","start":"2025-12-13T13:08:00.067710Z","end":"2025-12-13T13:08:00.188773Z","steps":["trace[1659867722] 'agreement among raft nodes before linearized reading'  (duration: 120.991989ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T13:08:00.190351Z","caller":"traceutil/trace.go:172","msg":"trace[1619384361] transaction","detail":"{read_only:false; response_revision:1120; number_of_response:1; }","duration":"184.853714ms","start":"2025-12-13T13:08:00.005487Z","end":"2025-12-13T13:08:00.190341Z","steps":["trace[1619384361] 'process raft request'  (duration: 184.502586ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T13:08:08.707407Z","caller":"traceutil/trace.go:172","msg":"trace[2146368593] transaction","detail":"{read_only:false; response_revision:1147; number_of_response:1; }","duration":"159.312268ms","start":"2025-12-13T13:08:08.548082Z","end":"2025-12-13T13:08:08.707395Z","steps":["trace[2146368593] 'process raft request'  (duration: 159.230951ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T13:08:36.417586Z","caller":"traceutil/trace.go:172","msg":"trace[1160569190] linearizableReadLoop","detail":"{readStateIndex:1365; appliedIndex:1365; }","duration":"277.928298ms","start":"2025-12-13T13:08:36.139641Z","end":"2025-12-13T13:08:36.417570Z","steps":["trace[1160569190] 'read index received'  (duration: 277.883654ms)","trace[1160569190] 'applied index is now lower than readState.Index'  (duration: 38.452µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-13T13:08:36.417686Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"278.030099ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-13T13:08:36.417704Z","caller":"traceutil/trace.go:172","msg":"trace[1349741951] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1325; }","duration":"278.062617ms","start":"2025-12-13T13:08:36.139637Z","end":"2025-12-13T13:08:36.417699Z","steps":["trace[1349741951] 'agreement among raft nodes before linearized reading'  (duration: 278.002858ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T13:08:36.417949Z","caller":"traceutil/trace.go:172","msg":"trace[1673974490] transaction","detail":"{read_only:false; response_revision:1326; number_of_response:1; }","duration":"284.552335ms","start":"2025-12-13T13:08:36.133390Z","end":"2025-12-13T13:08:36.417942Z","steps":["trace[1673974490] 'process raft request'  (duration: 284.477349ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T13:08:38.301429Z","caller":"traceutil/trace.go:172","msg":"trace[723761713] linearizableReadLoop","detail":"{readStateIndex:1368; appliedIndex:1368; }","duration":"213.250557ms","start":"2025-12-13T13:08:38.088160Z","end":"2025-12-13T13:08:38.301410Z","steps":["trace[723761713] 'read index received'  (duration: 213.245186ms)","trace[723761713] 'applied index is now lower than readState.Index'  (duration: 4.466µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-13T13:08:38.301598Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"213.423968ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-13T13:08:38.301619Z","caller":"traceutil/trace.go:172","msg":"trace[1651104628] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1328; }","duration":"213.464316ms","start":"2025-12-13T13:08:38.088149Z","end":"2025-12-13T13:08:38.301613Z","steps":["trace[1651104628] 'agreement among raft nodes before linearized reading'  (duration: 213.34678ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T13:08:38.301786Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"162.175376ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-13T13:08:38.301801Z","caller":"traceutil/trace.go:172","msg":"trace[1907552154] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1328; }","duration":"162.19275ms","start":"2025-12-13T13:08:38.139604Z","end":"2025-12-13T13:08:38.301796Z","steps":["trace[1907552154] 'agreement among raft nodes before linearized reading'  (duration: 162.16347ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T13:08:38.301932Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.230706ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2025-12-13T13:08:38.301957Z","caller":"traceutil/trace.go:172","msg":"trace[143458453] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1328; }","duration":"126.254342ms","start":"2025-12-13T13:08:38.175697Z","end":"2025-12-13T13:08:38.301951Z","steps":["trace[143458453] 'agreement among raft nodes before linearized reading'  (duration: 126.185318ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T13:08:48.935623Z","caller":"traceutil/trace.go:172","msg":"trace[881859974] transaction","detail":"{read_only:false; response_revision:1429; number_of_response:1; }","duration":"141.627573ms","start":"2025-12-13T13:08:48.793982Z","end":"2025-12-13T13:08:48.935609Z","steps":["trace[881859974] 'process raft request'  (duration: 141.493027ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:11:06 up 4 min,  0 users,  load average: 0.54, 0.74, 0.37
	Linux addons-685870 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Dec 13 11:18:23 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [1f159075538cb5e1ad4bda439214bba87babd48e4b08ada0c68a49789f835cd6] <==
	W1213 13:07:25.631892       1 handler_proxy.go:99] no RequestInfo found in the context
	E1213 13:07:25.631969       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1213 13:07:25.653143       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1213 13:08:23.278200       1 conn.go:339] Error on socket receive: read tcp 192.168.39.155:8443->192.168.39.1:40120: use of closed network connection
	I1213 13:08:32.521353       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.217.207"}
	I1213 13:08:39.077453       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1213 13:08:39.269032       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.103.110.21"}
	I1213 13:08:49.166457       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1213 13:09:11.203697       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1213 13:09:11.203826       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1213 13:09:11.237504       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1213 13:09:11.237590       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1213 13:09:11.252410       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1213 13:09:11.252460       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1213 13:09:11.273991       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1213 13:09:11.274020       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1213 13:09:12.237607       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1213 13:09:12.274558       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1213 13:09:12.390364       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E1213 13:09:19.886222       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1213 13:09:26.656036       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1213 13:11:05.126520       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.96.39"}
	
	
	==> kube-controller-manager [70281e43646fa99dc87975cf5eea957a37772d62d01261defa66be92bdfb79a1] <==
	I1213 13:09:18.667017       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	E1213 13:09:20.514817       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1213 13:09:20.516058       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1213 13:09:21.710074       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1213 13:09:21.711238       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1213 13:09:29.940159       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1213 13:09:29.941136       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1213 13:09:32.812881       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1213 13:09:32.813834       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1213 13:09:33.003359       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1213 13:09:33.004356       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1213 13:09:46.436900       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1213 13:09:46.437857       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1213 13:09:47.010669       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1213 13:09:47.011863       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1213 13:09:54.325788       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1213 13:09:54.327005       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1213 13:10:23.235285       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1213 13:10:23.236335       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1213 13:10:23.607193       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1213 13:10:23.608415       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1213 13:10:34.898142       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1213 13:10:34.899034       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1213 13:11:06.546423       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1213 13:11:06.549020       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [630eea4f4f055f1d0825770d1d580310b0b51f058e1711e54a62394836b247dd] <==
	I1213 13:06:44.587340       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 13:06:44.687487       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 13:06:44.687630       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.155"]
	E1213 13:06:44.687694       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 13:06:44.734136       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1213 13:06:44.734261       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1213 13:06:44.734304       1 server_linux.go:132] "Using iptables Proxier"
	I1213 13:06:44.746384       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 13:06:44.746639       1 server.go:527] "Version info" version="v1.34.2"
	I1213 13:06:44.746679       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 13:06:44.751482       1 config.go:200] "Starting service config controller"
	I1213 13:06:44.751804       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 13:06:44.751849       1 config.go:106] "Starting endpoint slice config controller"
	I1213 13:06:44.751854       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 13:06:44.751903       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 13:06:44.751922       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 13:06:44.752869       1 config.go:309] "Starting node config controller"
	I1213 13:06:44.752896       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 13:06:44.752903       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 13:06:44.852619       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 13:06:44.852742       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 13:06:44.852763       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [309ba569e8bc0584154808bbc0a9005c3d74c04dd4028e42972ad622398ee1a0] <==
	E1213 13:06:35.802213       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1213 13:06:35.802227       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 13:06:35.802379       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 13:06:35.802399       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1213 13:06:35.803783       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1213 13:06:36.630820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1213 13:06:36.651338       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1213 13:06:36.684227       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 13:06:36.708960       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 13:06:36.838862       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 13:06:36.860507       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 13:06:36.875175       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 13:06:36.880683       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1213 13:06:36.921225       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1213 13:06:37.001879       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 13:06:37.050619       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 13:06:37.068650       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1213 13:06:37.122456       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1213 13:06:37.143086       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 13:06:37.149814       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 13:06:37.293733       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 13:06:37.334237       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1213 13:06:37.339682       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1213 13:06:37.399050       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1213 13:06:39.675227       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 13 13:09:39 addons-685870 kubelet[1499]: E1213 13:09:39.397882    1499 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765631379397456113 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 13 13:09:39 addons-685870 kubelet[1499]: E1213 13:09:39.397904    1499 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765631379397456113 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 13 13:09:40 addons-685870 kubelet[1499]: I1213 13:09:40.652310    1499 scope.go:117] "RemoveContainer" containerID="2dc739a2dec434c4f022ab40a2bd6308017a3dd4e26c72b3ee97b6d771585dac"
	Dec 13 13:09:40 addons-685870 kubelet[1499]: I1213 13:09:40.766656    1499 scope.go:117] "RemoveContainer" containerID="ad32e5c2130e95920569a26187dcf953b523385c634895b5a424cc551478823d"
	Dec 13 13:09:40 addons-685870 kubelet[1499]: I1213 13:09:40.892210    1499 scope.go:117] "RemoveContainer" containerID="51ce09200f4c86647e1461b8c1602837cef9dabffc683ad4a814f42c7f88c31a"
	Dec 13 13:09:41 addons-685870 kubelet[1499]: I1213 13:09:41.008335    1499 scope.go:117] "RemoveContainer" containerID="354338c937f694f67f6120f5d3ca5ad4bf590b9ec67506ee8dc7cb61b837078c"
	Dec 13 13:09:49 addons-685870 kubelet[1499]: E1213 13:09:49.402034    1499 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765631389401413721 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 13 13:09:49 addons-685870 kubelet[1499]: E1213 13:09:49.402075    1499 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765631389401413721 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 13 13:09:59 addons-685870 kubelet[1499]: E1213 13:09:59.404921    1499 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765631399404499127 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 13 13:09:59 addons-685870 kubelet[1499]: E1213 13:09:59.404963    1499 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765631399404499127 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 13 13:10:09 addons-685870 kubelet[1499]: E1213 13:10:09.409227    1499 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765631409408715537 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 13 13:10:09 addons-685870 kubelet[1499]: E1213 13:10:09.409267    1499 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765631409408715537 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 13 13:10:19 addons-685870 kubelet[1499]: E1213 13:10:19.412243    1499 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765631419411814343 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 13 13:10:19 addons-685870 kubelet[1499]: E1213 13:10:19.412501    1499 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765631419411814343 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 13 13:10:24 addons-685870 kubelet[1499]: I1213 13:10:24.080917    1499 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-sl2f8" secret="" err="secret \"gcp-auth\" not found"
	Dec 13 13:10:29 addons-685870 kubelet[1499]: E1213 13:10:29.415332    1499 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765631429414918172 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 13 13:10:29 addons-685870 kubelet[1499]: E1213 13:10:29.415410    1499 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765631429414918172 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 13 13:10:39 addons-685870 kubelet[1499]: E1213 13:10:39.419101    1499 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765631439418718821 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 13 13:10:39 addons-685870 kubelet[1499]: E1213 13:10:39.419144    1499 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765631439418718821 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 13 13:10:49 addons-685870 kubelet[1499]: E1213 13:10:49.421888    1499 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765631449421410809 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 13 13:10:49 addons-685870 kubelet[1499]: E1213 13:10:49.421913    1499 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765631449421410809 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 13 13:10:55 addons-685870 kubelet[1499]: I1213 13:10:55.081772    1499 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 13 13:10:59 addons-685870 kubelet[1499]: E1213 13:10:59.424205    1499 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765631459423737135 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 13 13:10:59 addons-685870 kubelet[1499]: E1213 13:10:59.424230    1499 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765631459423737135 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 13 13:11:05 addons-685870 kubelet[1499]: I1213 13:11:05.127063    1499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpvmm\" (UniqueName: \"kubernetes.io/projected/029b48e2-9bbb-4c18-8728-e55e820b6f1e-kube-api-access-qpvmm\") pod \"hello-world-app-5d498dc89-mvsgp\" (UID: \"029b48e2-9bbb-4c18-8728-e55e820b6f1e\") " pod="default/hello-world-app-5d498dc89-mvsgp"
	
	
	==> storage-provisioner [24e857c13b18225a46dc09a8a369409eba68c7bc0f71370e487890c74c2f44da] <==
	W1213 13:10:41.489025       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:10:43.492633       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:10:43.498269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:10:45.501762       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:10:45.508868       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:10:47.511758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:10:47.516456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:10:49.520592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:10:49.526358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:10:51.530145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:10:51.535424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:10:53.539206       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:10:53.547111       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:10:55.551298       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:10:55.556893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:10:57.559935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:10:57.568741       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:10:59.572093       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:10:59.577071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:11:01.580399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:11:01.587903       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:11:03.590863       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:11:03.596956       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:11:05.601783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:11:05.612164       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-685870 -n addons-685870
helpers_test.go:270: (dbg) Run:  kubectl --context addons-685870 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: hello-world-app-5d498dc89-mvsgp ingress-nginx-admission-create-fj4wb ingress-nginx-admission-patch-df6ws
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-685870 describe pod hello-world-app-5d498dc89-mvsgp ingress-nginx-admission-create-fj4wb ingress-nginx-admission-patch-df6ws
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-685870 describe pod hello-world-app-5d498dc89-mvsgp ingress-nginx-admission-create-fj4wb ingress-nginx-admission-patch-df6ws: exit status 1 (75.894024ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-mvsgp
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-685870/192.168.39.155
	Start Time:       Sat, 13 Dec 2025 13:11:05 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qpvmm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-qpvmm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-mvsgp to addons-685870
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-fj4wb" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-df6ws" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-685870 describe pod hello-world-app-5d498dc89-mvsgp ingress-nginx-admission-create-fj4wb ingress-nginx-admission-patch-df6ws: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-685870 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-685870 addons disable ingress-dns --alsologtostderr -v=1: (1.107631358s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-685870 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-685870 addons disable ingress --alsologtostderr -v=1: (7.845687684s)
--- FAIL: TestAddons/parallel/Ingress (157.55s)

                                                
                                    
TestFunctional/serial/SoftStart (1473.47s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1213 13:14:44.823590  135234 config.go:182] Loaded profile config "functional-101171": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-101171 --alsologtostderr -v=8
E1213 13:15:56.022044  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:18:12.154719  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:18:39.864367  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:23:12.162947  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:28:12.153711  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-101171 --alsologtostderr -v=8: exit status 80 (13m55.20786573s)

                                                
                                                
-- stdout --
	* [functional-101171] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-131207/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-131207/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "functional-101171" primary control-plane node in "functional-101171" cluster
	* Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 13:14:44.880702  139657 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:14:44.880839  139657 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:14:44.880850  139657 out.go:374] Setting ErrFile to fd 2...
	I1213 13:14:44.880858  139657 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:14:44.881087  139657 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-131207/.minikube/bin
	I1213 13:14:44.881551  139657 out.go:368] Setting JSON to false
	I1213 13:14:44.882447  139657 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3425,"bootTime":1765628260,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:14:44.882501  139657 start.go:143] virtualization: kvm guest
	I1213 13:14:44.884268  139657 out.go:179] * [functional-101171] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 13:14:44.885270  139657 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 13:14:44.885307  139657 notify.go:221] Checking for updates...
	I1213 13:14:44.887088  139657 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:14:44.888140  139657 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-131207/kubeconfig
	I1213 13:14:44.889099  139657 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-131207/.minikube
	I1213 13:14:44.890102  139657 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 13:14:44.891038  139657 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 13:14:44.892542  139657 config.go:182] Loaded profile config "functional-101171": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:14:44.892673  139657 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 13:14:44.927435  139657 out.go:179] * Using the kvm2 driver based on existing profile
	I1213 13:14:44.928372  139657 start.go:309] selected driver: kvm2
	I1213 13:14:44.928386  139657 start.go:927] validating driver "kvm2" against &{Name:functional-101171 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-101171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.124 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:14:44.928499  139657 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 13:14:44.929402  139657 cni.go:84] Creating CNI manager for ""
	I1213 13:14:44.929464  139657 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 13:14:44.929513  139657 start.go:353] cluster config:
	{Name:functional-101171 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-101171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.124 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:14:44.929611  139657 iso.go:125] acquiring lock: {Name:mk3b22d147b17c1b05cdcd03e16c3f962e91cdaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 13:14:44.930834  139657 out.go:179] * Starting "functional-101171" primary control-plane node in "functional-101171" cluster
	I1213 13:14:44.931691  139657 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 13:14:44.931725  139657 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-131207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1213 13:14:44.931737  139657 cache.go:65] Caching tarball of preloaded images
	I1213 13:14:44.931865  139657 preload.go:238] Found /home/jenkins/minikube-integration/22122-131207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 13:14:44.931879  139657 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 13:14:44.931980  139657 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-101171/config.json ...
	I1213 13:14:44.932230  139657 start.go:360] acquireMachinesLock for functional-101171: {Name:mkd3517afd6ad3d581ae9f96a02a4688cf83ce0e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 13:14:44.932293  139657 start.go:364] duration metric: took 38.36µs to acquireMachinesLock for "functional-101171"
	I1213 13:14:44.932313  139657 start.go:96] Skipping create...Using existing machine configuration
	I1213 13:14:44.932324  139657 fix.go:54] fixHost starting: 
	I1213 13:14:44.933932  139657 fix.go:112] recreateIfNeeded on functional-101171: state=Running err=<nil>
	W1213 13:14:44.933963  139657 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 13:14:44.935205  139657 out.go:252] * Updating the running kvm2 "functional-101171" VM ...
	I1213 13:14:44.935228  139657 machine.go:94] provisionDockerMachine start ...
	I1213 13:14:44.937452  139657 main.go:143] libmachine: domain functional-101171 has defined MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:44.937806  139657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7c:80:d0", ip: ""} in network mk-functional-101171: {Iface:virbr1 ExpiryTime:2025-12-13 14:13:45 +0000 UTC Type:0 Mac:52:54:00:7c:80:d0 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-101171 Clientid:01:52:54:00:7c:80:d0}
	I1213 13:14:44.937835  139657 main.go:143] libmachine: domain functional-101171 has defined IP address 192.168.39.124 and MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:44.938001  139657 main.go:143] libmachine: Using SSH client type: native
	I1213 13:14:44.938338  139657 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.124 22 <nil> <nil>}
	I1213 13:14:44.938355  139657 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 13:14:45.046797  139657 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-101171
	
	I1213 13:14:45.046826  139657 buildroot.go:166] provisioning hostname "functional-101171"
	I1213 13:14:45.049877  139657 main.go:143] libmachine: domain functional-101171 has defined MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:45.050321  139657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7c:80:d0", ip: ""} in network mk-functional-101171: {Iface:virbr1 ExpiryTime:2025-12-13 14:13:45 +0000 UTC Type:0 Mac:52:54:00:7c:80:d0 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-101171 Clientid:01:52:54:00:7c:80:d0}
	I1213 13:14:45.050355  139657 main.go:143] libmachine: domain functional-101171 has defined IP address 192.168.39.124 and MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:45.050541  139657 main.go:143] libmachine: Using SSH client type: native
	I1213 13:14:45.050782  139657 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.124 22 <nil> <nil>}
	I1213 13:14:45.050798  139657 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-101171 && echo "functional-101171" | sudo tee /etc/hostname
	I1213 13:14:45.172748  139657 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-101171
	
	I1213 13:14:45.175509  139657 main.go:143] libmachine: domain functional-101171 has defined MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:45.175971  139657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7c:80:d0", ip: ""} in network mk-functional-101171: {Iface:virbr1 ExpiryTime:2025-12-13 14:13:45 +0000 UTC Type:0 Mac:52:54:00:7c:80:d0 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-101171 Clientid:01:52:54:00:7c:80:d0}
	I1213 13:14:45.176008  139657 main.go:143] libmachine: domain functional-101171 has defined IP address 192.168.39.124 and MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:45.176182  139657 main.go:143] libmachine: Using SSH client type: native
	I1213 13:14:45.176385  139657 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.124 22 <nil> <nil>}
	I1213 13:14:45.176400  139657 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-101171' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-101171/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-101171' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 13:14:45.281039  139657 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 13:14:45.281099  139657 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22122-131207/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-131207/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-131207/.minikube}
	I1213 13:14:45.281128  139657 buildroot.go:174] setting up certificates
	I1213 13:14:45.281147  139657 provision.go:84] configureAuth start
	I1213 13:14:45.283949  139657 main.go:143] libmachine: domain functional-101171 has defined MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:45.284380  139657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7c:80:d0", ip: ""} in network mk-functional-101171: {Iface:virbr1 ExpiryTime:2025-12-13 14:13:45 +0000 UTC Type:0 Mac:52:54:00:7c:80:d0 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-101171 Clientid:01:52:54:00:7c:80:d0}
	I1213 13:14:45.284418  139657 main.go:143] libmachine: domain functional-101171 has defined IP address 192.168.39.124 and MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:45.286705  139657 main.go:143] libmachine: domain functional-101171 has defined MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:45.287058  139657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7c:80:d0", ip: ""} in network mk-functional-101171: {Iface:virbr1 ExpiryTime:2025-12-13 14:13:45 +0000 UTC Type:0 Mac:52:54:00:7c:80:d0 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-101171 Clientid:01:52:54:00:7c:80:d0}
	I1213 13:14:45.287116  139657 main.go:143] libmachine: domain functional-101171 has defined IP address 192.168.39.124 and MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:45.287256  139657 provision.go:143] copyHostCerts
	I1213 13:14:45.287299  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22122-131207/.minikube/ca.pem
	I1213 13:14:45.287346  139657 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-131207/.minikube/ca.pem, removing ...
	I1213 13:14:45.287365  139657 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-131207/.minikube/ca.pem
	I1213 13:14:45.287454  139657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-131207/.minikube/ca.pem (1078 bytes)
	I1213 13:14:45.287580  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22122-131207/.minikube/cert.pem
	I1213 13:14:45.287614  139657 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-131207/.minikube/cert.pem, removing ...
	I1213 13:14:45.287625  139657 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-131207/.minikube/cert.pem
	I1213 13:14:45.287672  139657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-131207/.minikube/cert.pem (1123 bytes)
	I1213 13:14:45.287766  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22122-131207/.minikube/key.pem
	I1213 13:14:45.287791  139657 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-131207/.minikube/key.pem, removing ...
	I1213 13:14:45.287797  139657 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-131207/.minikube/key.pem
	I1213 13:14:45.287842  139657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-131207/.minikube/key.pem (1675 bytes)
	I1213 13:14:45.287926  139657 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-131207/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca-key.pem org=jenkins.functional-101171 san=[127.0.0.1 192.168.39.124 functional-101171 localhost minikube]
	I1213 13:14:45.423318  139657 provision.go:177] copyRemoteCerts
	I1213 13:14:45.423403  139657 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 13:14:45.425911  139657 main.go:143] libmachine: domain functional-101171 has defined MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:45.426340  139657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7c:80:d0", ip: ""} in network mk-functional-101171: {Iface:virbr1 ExpiryTime:2025-12-13 14:13:45 +0000 UTC Type:0 Mac:52:54:00:7c:80:d0 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-101171 Clientid:01:52:54:00:7c:80:d0}
	I1213 13:14:45.426370  139657 main.go:143] libmachine: domain functional-101171 has defined IP address 192.168.39.124 and MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:45.426502  139657 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/functional-101171/id_rsa Username:docker}
	I1213 13:14:45.512848  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 13:14:45.512952  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 13:14:45.542724  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 13:14:45.542812  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 13:14:45.571677  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 13:14:45.571772  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 13:14:45.601284  139657 provision.go:87] duration metric: took 320.120369ms to configureAuth
	I1213 13:14:45.601314  139657 buildroot.go:189] setting minikube options for container-runtime
	I1213 13:14:45.601491  139657 config.go:182] Loaded profile config "functional-101171": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:14:45.604379  139657 main.go:143] libmachine: domain functional-101171 has defined MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:45.604741  139657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7c:80:d0", ip: ""} in network mk-functional-101171: {Iface:virbr1 ExpiryTime:2025-12-13 14:13:45 +0000 UTC Type:0 Mac:52:54:00:7c:80:d0 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-101171 Clientid:01:52:54:00:7c:80:d0}
	I1213 13:14:45.604764  139657 main.go:143] libmachine: domain functional-101171 has defined IP address 192.168.39.124 and MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:45.604932  139657 main.go:143] libmachine: Using SSH client type: native
	I1213 13:14:45.605181  139657 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.124 22 <nil> <nil>}
	I1213 13:14:45.605200  139657 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 13:14:51.168422  139657 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 13:14:51.168457  139657 machine.go:97] duration metric: took 6.233220346s to provisionDockerMachine
	I1213 13:14:51.168486  139657 start.go:293] postStartSetup for "functional-101171" (driver="kvm2")
	I1213 13:14:51.168502  139657 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 13:14:51.168611  139657 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 13:14:51.171649  139657 main.go:143] libmachine: domain functional-101171 has defined MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:51.172012  139657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7c:80:d0", ip: ""} in network mk-functional-101171: {Iface:virbr1 ExpiryTime:2025-12-13 14:13:45 +0000 UTC Type:0 Mac:52:54:00:7c:80:d0 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-101171 Clientid:01:52:54:00:7c:80:d0}
	I1213 13:14:51.172099  139657 main.go:143] libmachine: domain functional-101171 has defined IP address 192.168.39.124 and MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:51.172264  139657 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/functional-101171/id_rsa Username:docker}
	I1213 13:14:51.256552  139657 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 13:14:51.261415  139657 command_runner.go:130] > NAME=Buildroot
	I1213 13:14:51.261442  139657 command_runner.go:130] > VERSION=2025.02-dirty
	I1213 13:14:51.261446  139657 command_runner.go:130] > ID=buildroot
	I1213 13:14:51.261450  139657 command_runner.go:130] > VERSION_ID=2025.02
	I1213 13:14:51.261455  139657 command_runner.go:130] > PRETTY_NAME="Buildroot 2025.02"
	I1213 13:14:51.261540  139657 info.go:137] Remote host: Buildroot 2025.02
	I1213 13:14:51.261567  139657 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-131207/.minikube/addons for local assets ...
	I1213 13:14:51.261651  139657 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-131207/.minikube/files for local assets ...
	I1213 13:14:51.261758  139657 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/ssl/certs/1352342.pem -> 1352342.pem in /etc/ssl/certs
	I1213 13:14:51.261772  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/ssl/certs/1352342.pem -> /etc/ssl/certs/1352342.pem
	I1213 13:14:51.261876  139657 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/test/nested/copy/135234/hosts -> hosts in /etc/test/nested/copy/135234
	I1213 13:14:51.261886  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/test/nested/copy/135234/hosts -> /etc/test/nested/copy/135234/hosts
	I1213 13:14:51.261944  139657 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/135234
	I1213 13:14:51.275404  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/ssl/certs/1352342.pem --> /etc/ssl/certs/1352342.pem (1708 bytes)
	I1213 13:14:51.304392  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/test/nested/copy/135234/hosts --> /etc/test/nested/copy/135234/hosts (40 bytes)
	I1213 13:14:51.390782  139657 start.go:296] duration metric: took 222.277729ms for postStartSetup
	I1213 13:14:51.390831  139657 fix.go:56] duration metric: took 6.458506569s for fixHost
	I1213 13:14:51.394087  139657 main.go:143] libmachine: domain functional-101171 has defined MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:51.394507  139657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7c:80:d0", ip: ""} in network mk-functional-101171: {Iface:virbr1 ExpiryTime:2025-12-13 14:13:45 +0000 UTC Type:0 Mac:52:54:00:7c:80:d0 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-101171 Clientid:01:52:54:00:7c:80:d0}
	I1213 13:14:51.394539  139657 main.go:143] libmachine: domain functional-101171 has defined IP address 192.168.39.124 and MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:51.394733  139657 main.go:143] libmachine: Using SSH client type: native
	I1213 13:14:51.395032  139657 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.124 22 <nil> <nil>}
	I1213 13:14:51.395048  139657 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1213 13:14:51.547616  139657 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765631691.540521728
	
	I1213 13:14:51.547640  139657 fix.go:216] guest clock: 1765631691.540521728
	I1213 13:14:51.547663  139657 fix.go:229] Guest: 2025-12-13 13:14:51.540521728 +0000 UTC Remote: 2025-12-13 13:14:51.390838299 +0000 UTC m=+6.561594252 (delta=149.683429ms)
	I1213 13:14:51.547685  139657 fix.go:200] guest clock delta is within tolerance: 149.683429ms
	I1213 13:14:51.547691  139657 start.go:83] releasing machines lock for "functional-101171", held for 6.615387027s
	I1213 13:14:51.550620  139657 main.go:143] libmachine: domain functional-101171 has defined MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:51.551093  139657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7c:80:d0", ip: ""} in network mk-functional-101171: {Iface:virbr1 ExpiryTime:2025-12-13 14:13:45 +0000 UTC Type:0 Mac:52:54:00:7c:80:d0 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-101171 Clientid:01:52:54:00:7c:80:d0}
	I1213 13:14:51.551134  139657 main.go:143] libmachine: domain functional-101171 has defined IP address 192.168.39.124 and MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:51.551858  139657 ssh_runner.go:195] Run: cat /version.json
	I1213 13:14:51.551895  139657 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 13:14:51.555225  139657 main.go:143] libmachine: domain functional-101171 has defined MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:51.555396  139657 main.go:143] libmachine: domain functional-101171 has defined MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:51.555679  139657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7c:80:d0", ip: ""} in network mk-functional-101171: {Iface:virbr1 ExpiryTime:2025-12-13 14:13:45 +0000 UTC Type:0 Mac:52:54:00:7c:80:d0 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-101171 Clientid:01:52:54:00:7c:80:d0}
	I1213 13:14:51.555709  139657 main.go:143] libmachine: domain functional-101171 has defined IP address 192.168.39.124 and MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:51.555901  139657 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/functional-101171/id_rsa Username:docker}
	I1213 13:14:51.555915  139657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7c:80:d0", ip: ""} in network mk-functional-101171: {Iface:virbr1 ExpiryTime:2025-12-13 14:13:45 +0000 UTC Type:0 Mac:52:54:00:7c:80:d0 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-101171 Clientid:01:52:54:00:7c:80:d0}
	I1213 13:14:51.555948  139657 main.go:143] libmachine: domain functional-101171 has defined IP address 192.168.39.124 and MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:51.556188  139657 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/functional-101171/id_rsa Username:docker}
	I1213 13:14:51.711392  139657 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1213 13:14:51.711480  139657 command_runner.go:130] > {"iso_version": "v1.37.0-1765613186-22122", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "89f69959280ebeefd164cfeba1f5b84c6f004bc9"}
	I1213 13:14:51.711625  139657 ssh_runner.go:195] Run: systemctl --version
	I1213 13:14:51.721211  139657 command_runner.go:130] > systemd 256 (256.7)
	I1213 13:14:51.721261  139657 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP -LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT -LIBARCHIVE
	I1213 13:14:51.721342  139657 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 13:14:51.928878  139657 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 13:14:51.943312  139657 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1213 13:14:51.943381  139657 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 13:14:51.943457  139657 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 13:14:51.961133  139657 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 13:14:51.961160  139657 start.go:496] detecting cgroup driver to use...
	I1213 13:14:51.961234  139657 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 13:14:52.008684  139657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 13:14:52.058685  139657 docker.go:218] disabling cri-docker service (if available) ...
	I1213 13:14:52.058767  139657 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 13:14:52.099652  139657 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 13:14:52.129214  139657 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 13:14:52.454020  139657 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 13:14:52.731152  139657 docker.go:234] disabling docker service ...
	I1213 13:14:52.731233  139657 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 13:14:52.789926  139657 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 13:14:52.807635  139657 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 13:14:53.089730  139657 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 13:14:53.328299  139657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 13:14:53.351747  139657 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 13:14:53.384802  139657 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1213 13:14:53.384876  139657 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 13:14:53.385004  139657 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:14:53.402675  139657 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 13:14:53.402773  139657 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:14:53.425941  139657 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:14:53.444350  139657 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:14:53.459025  139657 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 13:14:53.488518  139657 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:14:53.515384  139657 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:14:53.531334  139657 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:14:53.545103  139657 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 13:14:53.555838  139657 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1213 13:14:53.556273  139657 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 13:14:53.567831  139657 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:14:53.751704  139657 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 13:16:24.195369  139657 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.443610327s)
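Note on the step above: the sed edits (pause_image, cgroup_manager, conmon_cgroup, default_sysctls) all rewrite /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. A minimal sketch of the resulting drop-in, assuming the stock CRI-O section layout (the section headers themselves are not shown in the log; the key/value pairs are taken from the sed commands above):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]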
	I1213 13:16:24.195422  139657 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 13:16:24.195496  139657 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 13:16:24.201208  139657 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1213 13:16:24.201250  139657 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1213 13:16:24.201260  139657 command_runner.go:130] > Device: 0,23	Inode: 1994        Links: 1
	I1213 13:16:24.201270  139657 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 13:16:24.201277  139657 command_runner.go:130] > Access: 2025-12-13 13:16:24.024303909 +0000
	I1213 13:16:24.201287  139657 command_runner.go:130] > Modify: 2025-12-13 13:16:24.024303909 +0000
	I1213 13:16:24.201293  139657 command_runner.go:130] > Change: 2025-12-13 13:16:24.024303909 +0000
	I1213 13:16:24.201298  139657 command_runner.go:130] >  Birth: 2025-12-13 13:16:24.024303909 +0000
	I1213 13:16:24.201336  139657 start.go:564] Will wait 60s for crictl version
	I1213 13:16:24.201389  139657 ssh_runner.go:195] Run: which crictl
	I1213 13:16:24.205825  139657 command_runner.go:130] > /usr/bin/crictl
	I1213 13:16:24.205969  139657 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 13:16:24.240544  139657 command_runner.go:130] > Version:  0.1.0
	I1213 13:16:24.240566  139657 command_runner.go:130] > RuntimeName:  cri-o
	I1213 13:16:24.240571  139657 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1213 13:16:24.240576  139657 command_runner.go:130] > RuntimeApiVersion:  v1
	I1213 13:16:24.240600  139657 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1213 13:16:24.240739  139657 ssh_runner.go:195] Run: crio --version
	I1213 13:16:24.274046  139657 command_runner.go:130] > crio version 1.29.1
	I1213 13:16:24.274084  139657 command_runner.go:130] > Version:        1.29.1
	I1213 13:16:24.274090  139657 command_runner.go:130] > GitCommit:      unknown
	I1213 13:16:24.274094  139657 command_runner.go:130] > GitCommitDate:  unknown
	I1213 13:16:24.274098  139657 command_runner.go:130] > GitTreeState:   clean
	I1213 13:16:24.274104  139657 command_runner.go:130] > BuildDate:      2025-12-13T11:21:09Z
	I1213 13:16:24.274108  139657 command_runner.go:130] > GoVersion:      go1.25.5
	I1213 13:16:24.274112  139657 command_runner.go:130] > Compiler:       gc
	I1213 13:16:24.274115  139657 command_runner.go:130] > Platform:       linux/amd64
	I1213 13:16:24.274119  139657 command_runner.go:130] > Linkmode:       dynamic
	I1213 13:16:24.274126  139657 command_runner.go:130] > BuildTags:      
	I1213 13:16:24.274131  139657 command_runner.go:130] >   containers_image_ostree_stub
	I1213 13:16:24.274135  139657 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1213 13:16:24.274138  139657 command_runner.go:130] >   btrfs_noversion
	I1213 13:16:24.274143  139657 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1213 13:16:24.274150  139657 command_runner.go:130] >   libdm_no_deferred_remove
	I1213 13:16:24.274153  139657 command_runner.go:130] >   seccomp
	I1213 13:16:24.274158  139657 command_runner.go:130] > LDFlags:          unknown
	I1213 13:16:24.274162  139657 command_runner.go:130] > SeccompEnabled:   true
	I1213 13:16:24.274166  139657 command_runner.go:130] > AppArmorEnabled:  false
	I1213 13:16:24.274253  139657 ssh_runner.go:195] Run: crio --version
	I1213 13:16:24.307345  139657 command_runner.go:130] > crio version 1.29.1
	I1213 13:16:24.307372  139657 command_runner.go:130] > Version:        1.29.1
	I1213 13:16:24.307385  139657 command_runner.go:130] > GitCommit:      unknown
	I1213 13:16:24.307390  139657 command_runner.go:130] > GitCommitDate:  unknown
	I1213 13:16:24.307394  139657 command_runner.go:130] > GitTreeState:   clean
	I1213 13:16:24.307400  139657 command_runner.go:130] > BuildDate:      2025-12-13T11:21:09Z
	I1213 13:16:24.307406  139657 command_runner.go:130] > GoVersion:      go1.25.5
	I1213 13:16:24.307412  139657 command_runner.go:130] > Compiler:       gc
	I1213 13:16:24.307419  139657 command_runner.go:130] > Platform:       linux/amd64
	I1213 13:16:24.307425  139657 command_runner.go:130] > Linkmode:       dynamic
	I1213 13:16:24.307436  139657 command_runner.go:130] > BuildTags:      
	I1213 13:16:24.307444  139657 command_runner.go:130] >   containers_image_ostree_stub
	I1213 13:16:24.307453  139657 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1213 13:16:24.307458  139657 command_runner.go:130] >   btrfs_noversion
	I1213 13:16:24.307462  139657 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1213 13:16:24.307468  139657 command_runner.go:130] >   libdm_no_deferred_remove
	I1213 13:16:24.307472  139657 command_runner.go:130] >   seccomp
	I1213 13:16:24.307476  139657 command_runner.go:130] > LDFlags:          unknown
	I1213 13:16:24.307481  139657 command_runner.go:130] > SeccompEnabled:   true
	I1213 13:16:24.307484  139657 command_runner.go:130] > AppArmorEnabled:  false
	I1213 13:16:24.309954  139657 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1213 13:16:24.314441  139657 main.go:143] libmachine: domain functional-101171 has defined MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:16:24.314910  139657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7c:80:d0", ip: ""} in network mk-functional-101171: {Iface:virbr1 ExpiryTime:2025-12-13 14:13:45 +0000 UTC Type:0 Mac:52:54:00:7c:80:d0 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-101171 Clientid:01:52:54:00:7c:80:d0}
	I1213 13:16:24.314934  139657 main.go:143] libmachine: domain functional-101171 has defined IP address 192.168.39.124 and MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:16:24.315179  139657 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1213 13:16:24.320471  139657 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1213 13:16:24.320604  139657 kubeadm.go:884] updating cluster {Name:functional-101171 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-101171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.124 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 13:16:24.320792  139657 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 13:16:24.320856  139657 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:16:24.358340  139657 command_runner.go:130] > {
	I1213 13:16:24.358367  139657 command_runner.go:130] >   "images":  [
	I1213 13:16:24.358373  139657 command_runner.go:130] >     {
	I1213 13:16:24.358385  139657 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1213 13:16:24.358391  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.358399  139657 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 13:16:24.358414  139657 command_runner.go:130] >       ],
	I1213 13:16:24.358422  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.358433  139657 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1213 13:16:24.358445  139657 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1213 13:16:24.358469  139657 command_runner.go:130] >       ],
	I1213 13:16:24.358478  139657 command_runner.go:130] >       "size":  "109379124",
	I1213 13:16:24.358484  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.358497  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.358504  139657 command_runner.go:130] >     },
	I1213 13:16:24.358509  139657 command_runner.go:130] >     {
	I1213 13:16:24.358519  139657 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1213 13:16:24.358529  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.358538  139657 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 13:16:24.358548  139657 command_runner.go:130] >       ],
	I1213 13:16:24.358553  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.358565  139657 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1213 13:16:24.358580  139657 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1213 13:16:24.358591  139657 command_runner.go:130] >       ],
	I1213 13:16:24.358598  139657 command_runner.go:130] >       "size":  "31470524",
	I1213 13:16:24.358604  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.358617  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.358623  139657 command_runner.go:130] >     },
	I1213 13:16:24.358626  139657 command_runner.go:130] >     {
	I1213 13:16:24.358634  139657 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1213 13:16:24.358644  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.358653  139657 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1213 13:16:24.358661  139657 command_runner.go:130] >       ],
	I1213 13:16:24.358668  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.358685  139657 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1213 13:16:24.358707  139657 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1213 13:16:24.358715  139657 command_runner.go:130] >       ],
	I1213 13:16:24.358721  139657 command_runner.go:130] >       "size":  "76103547",
	I1213 13:16:24.358731  139657 command_runner.go:130] >       "username":  "nonroot",
	I1213 13:16:24.358737  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.358744  139657 command_runner.go:130] >     },
	I1213 13:16:24.358748  139657 command_runner.go:130] >     {
	I1213 13:16:24.358757  139657 command_runner.go:130] >       "id":  "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1",
	I1213 13:16:24.358770  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.358779  139657 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 13:16:24.358784  139657 command_runner.go:130] >       ],
	I1213 13:16:24.358793  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.358810  139657 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1213 13:16:24.358823  139657 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"
	I1213 13:16:24.358828  139657 command_runner.go:130] >       ],
	I1213 13:16:24.358834  139657 command_runner.go:130] >       "size":  "63585106",
	I1213 13:16:24.358840  139657 command_runner.go:130] >       "uid":  {
	I1213 13:16:24.358849  139657 command_runner.go:130] >         "value":  "0"
	I1213 13:16:24.358855  139657 command_runner.go:130] >       },
	I1213 13:16:24.358875  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.358883  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.358889  139657 command_runner.go:130] >     },
	I1213 13:16:24.358896  139657 command_runner.go:130] >     {
	I1213 13:16:24.358905  139657 command_runner.go:130] >       "id":  "a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85",
	I1213 13:16:24.358911  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.358918  139657 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.2"
	I1213 13:16:24.358926  139657 command_runner.go:130] >       ],
	I1213 13:16:24.358933  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.358946  139657 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077",
	I1213 13:16:24.358960  139657 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd"
	I1213 13:16:24.358967  139657 command_runner.go:130] >       ],
	I1213 13:16:24.358974  139657 command_runner.go:130] >       "size":  "89046001",
	I1213 13:16:24.358982  139657 command_runner.go:130] >       "uid":  {
	I1213 13:16:24.358987  139657 command_runner.go:130] >         "value":  "0"
	I1213 13:16:24.358995  139657 command_runner.go:130] >       },
	I1213 13:16:24.359001  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.359010  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.359016  139657 command_runner.go:130] >     },
	I1213 13:16:24.359025  139657 command_runner.go:130] >     {
	I1213 13:16:24.359035  139657 command_runner.go:130] >       "id":  "01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8",
	I1213 13:16:24.359045  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.359060  139657 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.2"
	I1213 13:16:24.359103  139657 command_runner.go:130] >       ],
	I1213 13:16:24.359117  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.359130  139657 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb",
	I1213 13:16:24.359145  139657 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e"
	I1213 13:16:24.359151  139657 command_runner.go:130] >       ],
	I1213 13:16:24.359158  139657 command_runner.go:130] >       "size":  "76004183",
	I1213 13:16:24.359164  139657 command_runner.go:130] >       "uid":  {
	I1213 13:16:24.359169  139657 command_runner.go:130] >         "value":  "0"
	I1213 13:16:24.359177  139657 command_runner.go:130] >       },
	I1213 13:16:24.359182  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.359190  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.359196  139657 command_runner.go:130] >     },
	I1213 13:16:24.359201  139657 command_runner.go:130] >     {
	I1213 13:16:24.359218  139657 command_runner.go:130] >       "id":  "8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45",
	I1213 13:16:24.359228  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.359235  139657 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.2"
	I1213 13:16:24.359243  139657 command_runner.go:130] >       ],
	I1213 13:16:24.359251  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.359266  139657 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74",
	I1213 13:16:24.359281  139657 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"
	I1213 13:16:24.359291  139657 command_runner.go:130] >       ],
	I1213 13:16:24.359298  139657 command_runner.go:130] >       "size":  "73145240",
	I1213 13:16:24.359307  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.359314  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.359323  139657 command_runner.go:130] >     },
	I1213 13:16:24.359328  139657 command_runner.go:130] >     {
	I1213 13:16:24.359338  139657 command_runner.go:130] >       "id":  "88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952",
	I1213 13:16:24.359344  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.359350  139657 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.2"
	I1213 13:16:24.359355  139657 command_runner.go:130] >       ],
	I1213 13:16:24.359359  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.359366  139657 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6",
	I1213 13:16:24.359407  139657 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda"
	I1213 13:16:24.359414  139657 command_runner.go:130] >       ],
	I1213 13:16:24.359418  139657 command_runner.go:130] >       "size":  "53848919",
	I1213 13:16:24.359422  139657 command_runner.go:130] >       "uid":  {
	I1213 13:16:24.359425  139657 command_runner.go:130] >         "value":  "0"
	I1213 13:16:24.359428  139657 command_runner.go:130] >       },
	I1213 13:16:24.359432  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.359439  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.359442  139657 command_runner.go:130] >     },
	I1213 13:16:24.359445  139657 command_runner.go:130] >     {
	I1213 13:16:24.359453  139657 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1213 13:16:24.359457  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.359463  139657 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 13:16:24.359466  139657 command_runner.go:130] >       ],
	I1213 13:16:24.359470  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.359478  139657 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1213 13:16:24.359485  139657 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1213 13:16:24.359490  139657 command_runner.go:130] >       ],
	I1213 13:16:24.359494  139657 command_runner.go:130] >       "size":  "742092",
	I1213 13:16:24.359497  139657 command_runner.go:130] >       "uid":  {
	I1213 13:16:24.359501  139657 command_runner.go:130] >         "value":  "65535"
	I1213 13:16:24.359506  139657 command_runner.go:130] >       },
	I1213 13:16:24.359510  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.359514  139657 command_runner.go:130] >       "pinned":  true
	I1213 13:16:24.359519  139657 command_runner.go:130] >     }
	I1213 13:16:24.359522  139657 command_runner.go:130] >   ]
	I1213 13:16:24.359525  139657 command_runner.go:130] > }
	I1213 13:16:24.360333  139657 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 13:16:24.360355  139657 crio.go:433] Images already preloaded, skipping extraction
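The image listing above is the raw JSON printed by "sudo crictl images --output json", which the preload check inspects before deciding to skip extraction. A minimal Go sketch of decoding that shape (field names taken from the output above; illustrative only, not minikube's own parsing code):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// image mirrors one entry of the "images" array shown in the log output.
	type image struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
		Username    string   `json:"username"`
		Pinned      bool     `json:"pinned"`
	}

	type imageList struct {
		Images []image `json:"images"`
	}

	func main() {
		// Same command the log runs; requires crictl on PATH and root privileges.
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		for _, img := range list.Images {
			fmt.Println(img.RepoTags, img.Size)
		}
	}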
	I1213 13:16:24.360418  139657 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:16:24.392193  139657 command_runner.go:130] > {
	I1213 13:16:24.392217  139657 command_runner.go:130] >   "images":  [
	I1213 13:16:24.392221  139657 command_runner.go:130] >     {
	I1213 13:16:24.392229  139657 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1213 13:16:24.392236  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.392246  139657 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 13:16:24.392257  139657 command_runner.go:130] >       ],
	I1213 13:16:24.392268  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.392284  139657 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1213 13:16:24.392297  139657 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1213 13:16:24.392305  139657 command_runner.go:130] >       ],
	I1213 13:16:24.392314  139657 command_runner.go:130] >       "size":  "109379124",
	I1213 13:16:24.392328  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.392335  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.392339  139657 command_runner.go:130] >     },
	I1213 13:16:24.392344  139657 command_runner.go:130] >     {
	I1213 13:16:24.392351  139657 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1213 13:16:24.392357  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.392364  139657 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 13:16:24.392372  139657 command_runner.go:130] >       ],
	I1213 13:16:24.392379  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.392393  139657 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1213 13:16:24.392409  139657 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1213 13:16:24.392417  139657 command_runner.go:130] >       ],
	I1213 13:16:24.392423  139657 command_runner.go:130] >       "size":  "31470524",
	I1213 13:16:24.392430  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.392438  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.392443  139657 command_runner.go:130] >     },
	I1213 13:16:24.392447  139657 command_runner.go:130] >     {
	I1213 13:16:24.392456  139657 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1213 13:16:24.392462  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.392467  139657 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1213 13:16:24.392472  139657 command_runner.go:130] >       ],
	I1213 13:16:24.392478  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.392492  139657 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1213 13:16:24.392507  139657 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1213 13:16:24.392518  139657 command_runner.go:130] >       ],
	I1213 13:16:24.392527  139657 command_runner.go:130] >       "size":  "76103547",
	I1213 13:16:24.392537  139657 command_runner.go:130] >       "username":  "nonroot",
	I1213 13:16:24.392545  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.392548  139657 command_runner.go:130] >     },
	I1213 13:16:24.392551  139657 command_runner.go:130] >     {
	I1213 13:16:24.392557  139657 command_runner.go:130] >       "id":  "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1",
	I1213 13:16:24.392564  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.392579  139657 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 13:16:24.392592  139657 command_runner.go:130] >       ],
	I1213 13:16:24.392603  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.392617  139657 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1213 13:16:24.392633  139657 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"
	I1213 13:16:24.392645  139657 command_runner.go:130] >       ],
	I1213 13:16:24.392654  139657 command_runner.go:130] >       "size":  "63585106",
	I1213 13:16:24.392663  139657 command_runner.go:130] >       "uid":  {
	I1213 13:16:24.392673  139657 command_runner.go:130] >         "value":  "0"
	I1213 13:16:24.392679  139657 command_runner.go:130] >       },
	I1213 13:16:24.392690  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.392698  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.392706  139657 command_runner.go:130] >     },
	I1213 13:16:24.392712  139657 command_runner.go:130] >     {
	I1213 13:16:24.392724  139657 command_runner.go:130] >       "id":  "a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85",
	I1213 13:16:24.392734  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.392746  139657 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.2"
	I1213 13:16:24.392754  139657 command_runner.go:130] >       ],
	I1213 13:16:24.392761  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.392775  139657 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077",
	I1213 13:16:24.392788  139657 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd"
	I1213 13:16:24.392794  139657 command_runner.go:130] >       ],
	I1213 13:16:24.392800  139657 command_runner.go:130] >       "size":  "89046001",
	I1213 13:16:24.392808  139657 command_runner.go:130] >       "uid":  {
	I1213 13:16:24.392818  139657 command_runner.go:130] >         "value":  "0"
	I1213 13:16:24.392826  139657 command_runner.go:130] >       },
	I1213 13:16:24.392833  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.392843  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.392852  139657 command_runner.go:130] >     },
	I1213 13:16:24.392856  139657 command_runner.go:130] >     {
	I1213 13:16:24.392868  139657 command_runner.go:130] >       "id":  "01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8",
	I1213 13:16:24.392876  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.392888  139657 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.2"
	I1213 13:16:24.392895  139657 command_runner.go:130] >       ],
	I1213 13:16:24.392909  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.392924  139657 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb",
	I1213 13:16:24.392940  139657 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e"
	I1213 13:16:24.392949  139657 command_runner.go:130] >       ],
	I1213 13:16:24.392959  139657 command_runner.go:130] >       "size":  "76004183",
	I1213 13:16:24.392967  139657 command_runner.go:130] >       "uid":  {
	I1213 13:16:24.392977  139657 command_runner.go:130] >         "value":  "0"
	I1213 13:16:24.392985  139657 command_runner.go:130] >       },
	I1213 13:16:24.392992  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.393001  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.393007  139657 command_runner.go:130] >     },
	I1213 13:16:24.393011  139657 command_runner.go:130] >     {
	I1213 13:16:24.393021  139657 command_runner.go:130] >       "id":  "8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45",
	I1213 13:16:24.393031  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.393042  139657 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.2"
	I1213 13:16:24.393048  139657 command_runner.go:130] >       ],
	I1213 13:16:24.393058  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.393089  139657 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74",
	I1213 13:16:24.393113  139657 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"
	I1213 13:16:24.393119  139657 command_runner.go:130] >       ],
	I1213 13:16:24.393123  139657 command_runner.go:130] >       "size":  "73145240",
	I1213 13:16:24.393133  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.393140  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.393145  139657 command_runner.go:130] >     },
	I1213 13:16:24.393150  139657 command_runner.go:130] >     {
	I1213 13:16:24.393160  139657 command_runner.go:130] >       "id":  "88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952",
	I1213 13:16:24.393167  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.393174  139657 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.2"
	I1213 13:16:24.393179  139657 command_runner.go:130] >       ],
	I1213 13:16:24.393186  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.393197  139657 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6",
	I1213 13:16:24.393226  139657 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda"
	I1213 13:16:24.393232  139657 command_runner.go:130] >       ],
	I1213 13:16:24.393246  139657 command_runner.go:130] >       "size":  "53848919",
	I1213 13:16:24.393251  139657 command_runner.go:130] >       "uid":  {
	I1213 13:16:24.393257  139657 command_runner.go:130] >         "value":  "0"
	I1213 13:16:24.393262  139657 command_runner.go:130] >       },
	I1213 13:16:24.393267  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.393274  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.393281  139657 command_runner.go:130] >     },
	I1213 13:16:24.393286  139657 command_runner.go:130] >     {
	I1213 13:16:24.393296  139657 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1213 13:16:24.393300  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.393305  139657 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 13:16:24.393311  139657 command_runner.go:130] >       ],
	I1213 13:16:24.393319  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.393333  139657 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1213 13:16:24.393349  139657 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1213 13:16:24.393357  139657 command_runner.go:130] >       ],
	I1213 13:16:24.393367  139657 command_runner.go:130] >       "size":  "742092",
	I1213 13:16:24.393376  139657 command_runner.go:130] >       "uid":  {
	I1213 13:16:24.393383  139657 command_runner.go:130] >         "value":  "65535"
	I1213 13:16:24.393390  139657 command_runner.go:130] >       },
	I1213 13:16:24.393396  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.393405  139657 command_runner.go:130] >       "pinned":  true
	I1213 13:16:24.393408  139657 command_runner.go:130] >     }
	I1213 13:16:24.393416  139657 command_runner.go:130] >   ]
	I1213 13:16:24.393422  139657 command_runner.go:130] > }
	I1213 13:16:24.393572  139657 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 13:16:24.393595  139657 cache_images.go:86] Images are preloaded, skipping loading
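The block above is CRI-O's image list as served over the CRI ImageService API; the fields shown (repoTags, repoDigests, size, uid, username, pinned) correspond one-to-one to the CRI Image message. Below is a minimal Go sketch of the same query against a local CRI-O socket; the socket path /var/run/crio/crio.sock and the use of k8s.io/cri-api are conventional assumptions for illustration, not details taken from this run.

	// Minimal sketch (not minikube code): list images from CRI-O over the CRI API.
	// Assumes the default CRI-O endpoint at /var/run/crio/crio.sock.
	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// gRPC over the local unix socket; no TLS for a local CRI endpoint.
		conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		images := runtimeapi.NewImageServiceClient(conn)
		resp, err := images.ListImages(ctx, &runtimeapi.ListImagesRequest{})
		if err != nil {
			panic(err)
		}
		for _, img := range resp.Images {
			// RepoTags, RepoDigests, Size_ and Pinned are the same fields
			// that appear in the JSON dump above.
			fmt.Printf("%v size=%d pinned=%v\n", img.RepoTags, img.Size_, img.Pinned)
		}
	}

Roughly the same information is available from the command line with something like crictl --runtime-endpoint unix:///var/run/crio/crio.sock images -o json, which is likely closer to what produced the JSON logged here.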
	I1213 13:16:24.393606  139657 kubeadm.go:935] updating node { 192.168.39.124 8441 v1.34.2 crio true true} ...
	I1213 13:16:24.393771  139657 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-101171 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.124
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:functional-101171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
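The kubelet stanza logged above is the systemd drop-in content minikube generates for kubelet.service. Assembled, it would look roughly like the excerpt below; the file path is an assumption for illustration, while the unit directives and flags are exactly the ones logged above. The empty ExecStart= line is the standard systemd idiom for clearing the ExecStart inherited from the base unit before setting the replacement command.

	# Hypothetical location, e.g. /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-101171 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.124

	[Install]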
	I1213 13:16:24.393855  139657 ssh_runner.go:195] Run: crio config
	I1213 13:16:24.427284  139657 command_runner.go:130] ! time="2025-12-13 13:16:24.422256723Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1213 13:16:24.433797  139657 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1213 13:16:24.439545  139657 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1213 13:16:24.439572  139657 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1213 13:16:24.439581  139657 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1213 13:16:24.439585  139657 command_runner.go:130] > #
	I1213 13:16:24.439594  139657 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1213 13:16:24.439602  139657 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1213 13:16:24.439611  139657 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1213 13:16:24.439629  139657 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1213 13:16:24.439638  139657 command_runner.go:130] > # reload'.
	I1213 13:16:24.439648  139657 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1213 13:16:24.439661  139657 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1213 13:16:24.439675  139657 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1213 13:16:24.439687  139657 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1213 13:16:24.439693  139657 command_runner.go:130] > [crio]
	I1213 13:16:24.439704  139657 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1213 13:16:24.439712  139657 command_runner.go:130] > # containers images, in this directory.
	I1213 13:16:24.439720  139657 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1213 13:16:24.439738  139657 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1213 13:16:24.439749  139657 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1213 13:16:24.439761  139657 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1213 13:16:24.439771  139657 command_runner.go:130] > # imagestore = ""
	I1213 13:16:24.439781  139657 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1213 13:16:24.439794  139657 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1213 13:16:24.439803  139657 command_runner.go:130] > # storage_driver = "overlay"
	I1213 13:16:24.439813  139657 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1213 13:16:24.439825  139657 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1213 13:16:24.439832  139657 command_runner.go:130] > storage_option = [
	I1213 13:16:24.439844  139657 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1213 13:16:24.439852  139657 command_runner.go:130] > ]
	I1213 13:16:24.439861  139657 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1213 13:16:24.439872  139657 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1213 13:16:24.439882  139657 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1213 13:16:24.439891  139657 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1213 13:16:24.439911  139657 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1213 13:16:24.439921  139657 command_runner.go:130] > # always happen on a node reboot
	I1213 13:16:24.439930  139657 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1213 13:16:24.439952  139657 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1213 13:16:24.439965  139657 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1213 13:16:24.439979  139657 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1213 13:16:24.439990  139657 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1213 13:16:24.440002  139657 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1213 13:16:24.440018  139657 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1213 13:16:24.440026  139657 command_runner.go:130] > # internal_wipe = true
	I1213 13:16:24.440039  139657 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1213 13:16:24.440051  139657 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1213 13:16:24.440059  139657 command_runner.go:130] > # internal_repair = false
	I1213 13:16:24.440068  139657 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1213 13:16:24.440095  139657 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1213 13:16:24.440115  139657 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1213 13:16:24.440127  139657 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1213 13:16:24.440141  139657 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1213 13:16:24.440150  139657 command_runner.go:130] > [crio.api]
	I1213 13:16:24.440158  139657 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1213 13:16:24.440169  139657 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1213 13:16:24.440178  139657 command_runner.go:130] > # IP address on which the stream server will listen.
	I1213 13:16:24.440188  139657 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1213 13:16:24.440198  139657 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1213 13:16:24.440210  139657 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1213 13:16:24.440217  139657 command_runner.go:130] > # stream_port = "0"
	I1213 13:16:24.440227  139657 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1213 13:16:24.440235  139657 command_runner.go:130] > # stream_enable_tls = false
	I1213 13:16:24.440245  139657 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1213 13:16:24.440256  139657 command_runner.go:130] > # stream_idle_timeout = ""
	I1213 13:16:24.440267  139657 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1213 13:16:24.440289  139657 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1213 13:16:24.440298  139657 command_runner.go:130] > # minutes.
	I1213 13:16:24.440313  139657 command_runner.go:130] > # stream_tls_cert = ""
	I1213 13:16:24.440341  139657 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1213 13:16:24.440355  139657 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1213 13:16:24.440363  139657 command_runner.go:130] > # stream_tls_key = ""
	I1213 13:16:24.440375  139657 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1213 13:16:24.440386  139657 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1213 13:16:24.440416  139657 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1213 13:16:24.440425  139657 command_runner.go:130] > # stream_tls_ca = ""
	I1213 13:16:24.440437  139657 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1213 13:16:24.440447  139657 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1213 13:16:24.440460  139657 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1213 13:16:24.440470  139657 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1213 13:16:24.440480  139657 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1213 13:16:24.440492  139657 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1213 13:16:24.440498  139657 command_runner.go:130] > [crio.runtime]
	I1213 13:16:24.440510  139657 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1213 13:16:24.440519  139657 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1213 13:16:24.440528  139657 command_runner.go:130] > # "nofile=1024:2048"
	I1213 13:16:24.440538  139657 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1213 13:16:24.440547  139657 command_runner.go:130] > # default_ulimits = [
	I1213 13:16:24.440553  139657 command_runner.go:130] > # ]
	I1213 13:16:24.440565  139657 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1213 13:16:24.440572  139657 command_runner.go:130] > # no_pivot = false
	I1213 13:16:24.440582  139657 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1213 13:16:24.440592  139657 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1213 13:16:24.440603  139657 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1213 13:16:24.440612  139657 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1213 13:16:24.440623  139657 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1213 13:16:24.440635  139657 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1213 13:16:24.440644  139657 command_runner.go:130] > conmon = "/usr/bin/conmon"
	I1213 13:16:24.440652  139657 command_runner.go:130] > # Cgroup setting for conmon
	I1213 13:16:24.440664  139657 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1213 13:16:24.440672  139657 command_runner.go:130] > conmon_cgroup = "pod"
	I1213 13:16:24.440690  139657 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1213 13:16:24.440701  139657 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1213 13:16:24.440713  139657 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1213 13:16:24.440726  139657 command_runner.go:130] > conmon_env = [
	I1213 13:16:24.440736  139657 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1213 13:16:24.440743  139657 command_runner.go:130] > ]
	I1213 13:16:24.440753  139657 command_runner.go:130] > # Additional environment variables to set for all the
	I1213 13:16:24.440764  139657 command_runner.go:130] > # containers. These are overridden if set in the
	I1213 13:16:24.440774  139657 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1213 13:16:24.440783  139657 command_runner.go:130] > # default_env = [
	I1213 13:16:24.440788  139657 command_runner.go:130] > # ]
	I1213 13:16:24.440801  139657 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1213 13:16:24.440813  139657 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1213 13:16:24.440822  139657 command_runner.go:130] > # selinux = false
	I1213 13:16:24.440831  139657 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1213 13:16:24.440844  139657 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1213 13:16:24.440853  139657 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1213 13:16:24.440860  139657 command_runner.go:130] > # seccomp_profile = ""
	I1213 13:16:24.440868  139657 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1213 13:16:24.440877  139657 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1213 13:16:24.440888  139657 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1213 13:16:24.440896  139657 command_runner.go:130] > # which might increase security.
	I1213 13:16:24.440904  139657 command_runner.go:130] > # This option is currently deprecated,
	I1213 13:16:24.440914  139657 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1213 13:16:24.440925  139657 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1213 13:16:24.440935  139657 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1213 13:16:24.440949  139657 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1213 13:16:24.440961  139657 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1213 13:16:24.440972  139657 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1213 13:16:24.440982  139657 command_runner.go:130] > # This option supports live configuration reload.
	I1213 13:16:24.440989  139657 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1213 13:16:24.441001  139657 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1213 13:16:24.441008  139657 command_runner.go:130] > # the cgroup blockio controller.
	I1213 13:16:24.441025  139657 command_runner.go:130] > # blockio_config_file = ""
	I1213 13:16:24.441040  139657 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1213 13:16:24.441047  139657 command_runner.go:130] > # blockio parameters.
	I1213 13:16:24.441054  139657 command_runner.go:130] > # blockio_reload = false
	I1213 13:16:24.441065  139657 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1213 13:16:24.441088  139657 command_runner.go:130] > # irqbalance daemon.
	I1213 13:16:24.441100  139657 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1213 13:16:24.441116  139657 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1213 13:16:24.441138  139657 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1213 13:16:24.441152  139657 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1213 13:16:24.441171  139657 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1213 13:16:24.441183  139657 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1213 13:16:24.441194  139657 command_runner.go:130] > # This option supports live configuration reload.
	I1213 13:16:24.441201  139657 command_runner.go:130] > # rdt_config_file = ""
	I1213 13:16:24.441210  139657 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1213 13:16:24.441217  139657 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1213 13:16:24.441272  139657 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1213 13:16:24.441283  139657 command_runner.go:130] > # separate_pull_cgroup = ""
	I1213 13:16:24.441291  139657 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1213 13:16:24.441300  139657 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1213 13:16:24.441306  139657 command_runner.go:130] > # will be added.
	I1213 13:16:24.441314  139657 command_runner.go:130] > # default_capabilities = [
	I1213 13:16:24.441320  139657 command_runner.go:130] > # 	"CHOWN",
	I1213 13:16:24.441328  139657 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1213 13:16:24.441334  139657 command_runner.go:130] > # 	"FSETID",
	I1213 13:16:24.441341  139657 command_runner.go:130] > # 	"FOWNER",
	I1213 13:16:24.441347  139657 command_runner.go:130] > # 	"SETGID",
	I1213 13:16:24.441355  139657 command_runner.go:130] > # 	"SETUID",
	I1213 13:16:24.441361  139657 command_runner.go:130] > # 	"SETPCAP",
	I1213 13:16:24.441368  139657 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1213 13:16:24.441375  139657 command_runner.go:130] > # 	"KILL",
	I1213 13:16:24.441381  139657 command_runner.go:130] > # ]
	I1213 13:16:24.441394  139657 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1213 13:16:24.441414  139657 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1213 13:16:24.441425  139657 command_runner.go:130] > # add_inheritable_capabilities = false
	I1213 13:16:24.441436  139657 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1213 13:16:24.441449  139657 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1213 13:16:24.441457  139657 command_runner.go:130] > default_sysctls = [
	I1213 13:16:24.441465  139657 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1213 13:16:24.441471  139657 command_runner.go:130] > ]
	I1213 13:16:24.441479  139657 command_runner.go:130] > # List of devices on the host that a
	I1213 13:16:24.441492  139657 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1213 13:16:24.441499  139657 command_runner.go:130] > # allowed_devices = [
	I1213 13:16:24.441514  139657 command_runner.go:130] > # 	"/dev/fuse",
	I1213 13:16:24.441521  139657 command_runner.go:130] > # ]
	I1213 13:16:24.441529  139657 command_runner.go:130] > # List of additional devices. specified as
	I1213 13:16:24.441544  139657 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1213 13:16:24.441554  139657 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1213 13:16:24.441563  139657 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1213 13:16:24.441577  139657 command_runner.go:130] > # additional_devices = [
	I1213 13:16:24.441583  139657 command_runner.go:130] > # ]
	I1213 13:16:24.441592  139657 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1213 13:16:24.441599  139657 command_runner.go:130] > # cdi_spec_dirs = [
	I1213 13:16:24.441606  139657 command_runner.go:130] > # 	"/etc/cdi",
	I1213 13:16:24.441615  139657 command_runner.go:130] > # 	"/var/run/cdi",
	I1213 13:16:24.441620  139657 command_runner.go:130] > # ]
	I1213 13:16:24.441631  139657 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1213 13:16:24.441644  139657 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1213 13:16:24.441653  139657 command_runner.go:130] > # Defaults to false.
	I1213 13:16:24.441661  139657 command_runner.go:130] > # device_ownership_from_security_context = false
	I1213 13:16:24.441674  139657 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1213 13:16:24.441685  139657 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1213 13:16:24.441694  139657 command_runner.go:130] > # hooks_dir = [
	I1213 13:16:24.441700  139657 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1213 13:16:24.441707  139657 command_runner.go:130] > # ]
	I1213 13:16:24.441719  139657 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1213 13:16:24.441739  139657 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1213 13:16:24.441751  139657 command_runner.go:130] > # its default mounts from the following two files:
	I1213 13:16:24.441757  139657 command_runner.go:130] > #
	I1213 13:16:24.441770  139657 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1213 13:16:24.441780  139657 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1213 13:16:24.441791  139657 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1213 13:16:24.441797  139657 command_runner.go:130] > #
	I1213 13:16:24.441809  139657 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1213 13:16:24.441819  139657 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1213 13:16:24.441832  139657 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1213 13:16:24.441841  139657 command_runner.go:130] > #      only add mounts it finds in this file.
	I1213 13:16:24.441849  139657 command_runner.go:130] > #
	I1213 13:16:24.441856  139657 command_runner.go:130] > # default_mounts_file = ""
	I1213 13:16:24.441866  139657 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1213 13:16:24.441877  139657 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1213 13:16:24.441886  139657 command_runner.go:130] > pids_limit = 1024
	I1213 13:16:24.441896  139657 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1213 13:16:24.441906  139657 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1213 13:16:24.441917  139657 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1213 13:16:24.441931  139657 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1213 13:16:24.441941  139657 command_runner.go:130] > # log_size_max = -1
	I1213 13:16:24.441953  139657 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1213 13:16:24.441963  139657 command_runner.go:130] > # log_to_journald = false
	I1213 13:16:24.441977  139657 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1213 13:16:24.441987  139657 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1213 13:16:24.441995  139657 command_runner.go:130] > # Path to directory for container attach sockets.
	I1213 13:16:24.442006  139657 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1213 13:16:24.442015  139657 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1213 13:16:24.442024  139657 command_runner.go:130] > # bind_mount_prefix = ""
	I1213 13:16:24.442034  139657 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1213 13:16:24.442042  139657 command_runner.go:130] > # read_only = false
	I1213 13:16:24.442052  139657 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1213 13:16:24.442065  139657 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1213 13:16:24.442093  139657 command_runner.go:130] > # live configuration reload.
	I1213 13:16:24.442101  139657 command_runner.go:130] > # log_level = "info"
	I1213 13:16:24.442120  139657 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1213 13:16:24.442131  139657 command_runner.go:130] > # This option supports live configuration reload.
	I1213 13:16:24.442139  139657 command_runner.go:130] > # log_filter = ""
	I1213 13:16:24.442149  139657 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1213 13:16:24.442163  139657 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1213 13:16:24.442172  139657 command_runner.go:130] > # separated by comma.
	I1213 13:16:24.442185  139657 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 13:16:24.442194  139657 command_runner.go:130] > # uid_mappings = ""
	I1213 13:16:24.442205  139657 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1213 13:16:24.442218  139657 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1213 13:16:24.442227  139657 command_runner.go:130] > # separated by comma.
	I1213 13:16:24.442244  139657 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 13:16:24.442254  139657 command_runner.go:130] > # gid_mappings = ""
	I1213 13:16:24.442264  139657 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1213 13:16:24.442277  139657 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1213 13:16:24.442289  139657 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1213 13:16:24.442302  139657 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 13:16:24.442310  139657 command_runner.go:130] > # minimum_mappable_uid = -1
	I1213 13:16:24.442320  139657 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1213 13:16:24.442333  139657 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1213 13:16:24.442344  139657 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1213 13:16:24.442357  139657 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 13:16:24.442364  139657 command_runner.go:130] > # minimum_mappable_gid = -1
	I1213 13:16:24.442373  139657 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1213 13:16:24.442391  139657 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1213 13:16:24.442402  139657 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1213 13:16:24.442409  139657 command_runner.go:130] > # ctr_stop_timeout = 30
	I1213 13:16:24.442419  139657 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1213 13:16:24.442430  139657 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1213 13:16:24.442441  139657 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1213 13:16:24.442450  139657 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1213 13:16:24.442467  139657 command_runner.go:130] > drop_infra_ctr = false
	I1213 13:16:24.442479  139657 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1213 13:16:24.442489  139657 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1213 13:16:24.442503  139657 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1213 13:16:24.442510  139657 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1213 13:16:24.442523  139657 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1213 13:16:24.442534  139657 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1213 13:16:24.442546  139657 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1213 13:16:24.442554  139657 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1213 13:16:24.442563  139657 command_runner.go:130] > # shared_cpuset = ""
	I1213 13:16:24.442572  139657 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1213 13:16:24.442581  139657 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1213 13:16:24.442589  139657 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1213 13:16:24.442601  139657 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1213 13:16:24.442608  139657 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1213 13:16:24.442618  139657 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1213 13:16:24.442631  139657 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1213 13:16:24.442640  139657 command_runner.go:130] > # enable_criu_support = false
	I1213 13:16:24.442650  139657 command_runner.go:130] > # Enable/disable the generation of the container,
	I1213 13:16:24.442660  139657 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1213 13:16:24.442667  139657 command_runner.go:130] > # enable_pod_events = false
	I1213 13:16:24.442677  139657 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1213 13:16:24.442688  139657 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1213 13:16:24.442699  139657 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1213 13:16:24.442706  139657 command_runner.go:130] > # default_runtime = "runc"
	I1213 13:16:24.442715  139657 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1213 13:16:24.442726  139657 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1213 13:16:24.442741  139657 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1213 13:16:24.442756  139657 command_runner.go:130] > # creation as a file is not desired either.
	I1213 13:16:24.442774  139657 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1213 13:16:24.442784  139657 command_runner.go:130] > # the hostname is being managed dynamically.
	I1213 13:16:24.442792  139657 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1213 13:16:24.442797  139657 command_runner.go:130] > # ]
	I1213 13:16:24.442815  139657 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1213 13:16:24.442828  139657 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1213 13:16:24.442840  139657 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1213 13:16:24.442851  139657 command_runner.go:130] > # Each entry in the table should follow the format:
	I1213 13:16:24.442856  139657 command_runner.go:130] > #
	I1213 13:16:24.442865  139657 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1213 13:16:24.442873  139657 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1213 13:16:24.442881  139657 command_runner.go:130] > # runtime_type = "oci"
	I1213 13:16:24.442949  139657 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1213 13:16:24.442960  139657 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1213 13:16:24.442967  139657 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1213 13:16:24.442973  139657 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1213 13:16:24.442978  139657 command_runner.go:130] > # monitor_env = []
	I1213 13:16:24.442986  139657 command_runner.go:130] > # privileged_without_host_devices = false
	I1213 13:16:24.442993  139657 command_runner.go:130] > # allowed_annotations = []
	I1213 13:16:24.443003  139657 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1213 13:16:24.443012  139657 command_runner.go:130] > # Where:
	I1213 13:16:24.443020  139657 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1213 13:16:24.443031  139657 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1213 13:16:24.443049  139657 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1213 13:16:24.443061  139657 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1213 13:16:24.443080  139657 command_runner.go:130] > #   in $PATH.
	I1213 13:16:24.443104  139657 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1213 13:16:24.443121  139657 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1213 13:16:24.443132  139657 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1213 13:16:24.443140  139657 command_runner.go:130] > #   state.
	I1213 13:16:24.443151  139657 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1213 13:16:24.443162  139657 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1213 13:16:24.443173  139657 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1213 13:16:24.443185  139657 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1213 13:16:24.443195  139657 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1213 13:16:24.443209  139657 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1213 13:16:24.443220  139657 command_runner.go:130] > #   The currently recognized values are:
	I1213 13:16:24.443242  139657 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1213 13:16:24.443258  139657 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1213 13:16:24.443270  139657 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1213 13:16:24.443280  139657 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1213 13:16:24.443293  139657 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1213 13:16:24.443305  139657 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1213 13:16:24.443319  139657 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1213 13:16:24.443332  139657 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1213 13:16:24.443342  139657 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1213 13:16:24.443354  139657 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1213 13:16:24.443362  139657 command_runner.go:130] > #   deprecated option "conmon".
	I1213 13:16:24.443374  139657 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1213 13:16:24.443385  139657 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1213 13:16:24.443397  139657 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1213 13:16:24.443407  139657 command_runner.go:130] > #   should be moved to the container's cgroup
	I1213 13:16:24.443418  139657 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I1213 13:16:24.443429  139657 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1213 13:16:24.443440  139657 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1213 13:16:24.443452  139657 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1213 13:16:24.443457  139657 command_runner.go:130] > #
	I1213 13:16:24.443467  139657 command_runner.go:130] > # Using the seccomp notifier feature:
	I1213 13:16:24.443473  139657 command_runner.go:130] > #
	I1213 13:16:24.443482  139657 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1213 13:16:24.443496  139657 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1213 13:16:24.443504  139657 command_runner.go:130] > #
	I1213 13:16:24.443514  139657 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1213 13:16:24.443525  139657 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1213 13:16:24.443533  139657 command_runner.go:130] > #
	I1213 13:16:24.443544  139657 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1213 13:16:24.443550  139657 command_runner.go:130] > # feature.
	I1213 13:16:24.443555  139657 command_runner.go:130] > #
	I1213 13:16:24.443567  139657 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1213 13:16:24.443577  139657 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1213 13:16:24.443598  139657 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1213 13:16:24.443613  139657 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1213 13:16:24.443628  139657 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1213 13:16:24.443636  139657 command_runner.go:130] > #
	I1213 13:16:24.443646  139657 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1213 13:16:24.443659  139657 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1213 13:16:24.443667  139657 command_runner.go:130] > #
	I1213 13:16:24.443676  139657 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1213 13:16:24.443688  139657 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1213 13:16:24.443694  139657 command_runner.go:130] > #
	I1213 13:16:24.443705  139657 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1213 13:16:24.443718  139657 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1213 13:16:24.443725  139657 command_runner.go:130] > # limitation.
	I1213 13:16:24.443734  139657 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1213 13:16:24.443740  139657 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1213 13:16:24.443747  139657 command_runner.go:130] > runtime_type = "oci"
	I1213 13:16:24.443755  139657 command_runner.go:130] > runtime_root = "/run/runc"
	I1213 13:16:24.443766  139657 command_runner.go:130] > runtime_config_path = ""
	I1213 13:16:24.443773  139657 command_runner.go:130] > monitor_path = "/usr/bin/conmon"
	I1213 13:16:24.443779  139657 command_runner.go:130] > monitor_cgroup = "pod"
	I1213 13:16:24.443786  139657 command_runner.go:130] > monitor_exec_cgroup = ""
	I1213 13:16:24.443792  139657 command_runner.go:130] > monitor_env = [
	I1213 13:16:24.443802  139657 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1213 13:16:24.443810  139657 command_runner.go:130] > ]
	I1213 13:16:24.443818  139657 command_runner.go:130] > privileged_without_host_devices = false
	I1213 13:16:24.443830  139657 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1213 13:16:24.443839  139657 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1213 13:16:24.443849  139657 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1213 13:16:24.443863  139657 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1213 13:16:24.443876  139657 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1213 13:16:24.443887  139657 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1213 13:16:24.443903  139657 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1213 13:16:24.443918  139657 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1213 13:16:24.443936  139657 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1213 13:16:24.443950  139657 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1213 13:16:24.443956  139657 command_runner.go:130] > # Example:
	I1213 13:16:24.443964  139657 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1213 13:16:24.443971  139657 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1213 13:16:24.443984  139657 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1213 13:16:24.443994  139657 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1213 13:16:24.444004  139657 command_runner.go:130] > # cpuset = 0
	I1213 13:16:24.444013  139657 command_runner.go:130] > # cpushares = "0-1"
	I1213 13:16:24.444019  139657 command_runner.go:130] > # Where:
	I1213 13:16:24.444027  139657 command_runner.go:130] > # The workload name is workload-type.
	I1213 13:16:24.444038  139657 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1213 13:16:24.444050  139657 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1213 13:16:24.444060  139657 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1213 13:16:24.444086  139657 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1213 13:16:24.444097  139657 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1213 13:16:24.444112  139657 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1213 13:16:24.444127  139657 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1213 13:16:24.444136  139657 command_runner.go:130] > # Default value is set to true
	I1213 13:16:24.444143  139657 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1213 13:16:24.444152  139657 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1213 13:16:24.444162  139657 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1213 13:16:24.444170  139657 command_runner.go:130] > # Default value is set to 'false'
	I1213 13:16:24.444179  139657 command_runner.go:130] > # disable_hostport_mapping = false
	I1213 13:16:24.444194  139657 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1213 13:16:24.444202  139657 command_runner.go:130] > #
	I1213 13:16:24.444212  139657 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1213 13:16:24.444227  139657 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1213 13:16:24.444240  139657 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1213 13:16:24.444250  139657 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1213 13:16:24.444260  139657 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1213 13:16:24.444277  139657 command_runner.go:130] > [crio.image]
	I1213 13:16:24.444290  139657 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1213 13:16:24.444308  139657 command_runner.go:130] > # default_transport = "docker://"
	I1213 13:16:24.444322  139657 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1213 13:16:24.444336  139657 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1213 13:16:24.444346  139657 command_runner.go:130] > # global_auth_file = ""
	I1213 13:16:24.444357  139657 command_runner.go:130] > # The image used to instantiate infra containers.
	I1213 13:16:24.444366  139657 command_runner.go:130] > # This option supports live configuration reload.
	I1213 13:16:24.444377  139657 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10.1"
	I1213 13:16:24.444388  139657 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1213 13:16:24.444401  139657 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1213 13:16:24.444411  139657 command_runner.go:130] > # This option supports live configuration reload.
	I1213 13:16:24.444418  139657 command_runner.go:130] > # pause_image_auth_file = ""
	I1213 13:16:24.444432  139657 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1213 13:16:24.444443  139657 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1213 13:16:24.444456  139657 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1213 13:16:24.444465  139657 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1213 13:16:24.444475  139657 command_runner.go:130] > # pause_command = "/pause"
	I1213 13:16:24.444485  139657 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1213 13:16:24.444498  139657 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1213 13:16:24.444510  139657 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1213 13:16:24.444522  139657 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1213 13:16:24.444533  139657 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1213 13:16:24.444547  139657 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1213 13:16:24.444555  139657 command_runner.go:130] > # pinned_images = [
	I1213 13:16:24.444560  139657 command_runner.go:130] > # ]
	I1213 13:16:24.444570  139657 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1213 13:16:24.444583  139657 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1213 13:16:24.444593  139657 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1213 13:16:24.444612  139657 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1213 13:16:24.444624  139657 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1213 13:16:24.444632  139657 command_runner.go:130] > # signature_policy = ""
	I1213 13:16:24.444644  139657 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1213 13:16:24.444655  139657 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1213 13:16:24.444668  139657 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1213 13:16:24.444686  139657 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1213 13:16:24.444698  139657 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1213 13:16:24.444707  139657 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1213 13:16:24.444717  139657 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1213 13:16:24.444730  139657 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1213 13:16:24.444737  139657 command_runner.go:130] > # changing them here.
	I1213 13:16:24.444744  139657 command_runner.go:130] > # insecure_registries = [
	I1213 13:16:24.444749  139657 command_runner.go:130] > # ]
	I1213 13:16:24.444762  139657 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind, and
	I1213 13:16:24.444771  139657 command_runner.go:130] > # ignore; the last one will ignore volumes entirely.
	I1213 13:16:24.444780  139657 command_runner.go:130] > # image_volumes = "mkdir"
	I1213 13:16:24.444788  139657 command_runner.go:130] > # Temporary directory to use for storing big files
	I1213 13:16:24.444796  139657 command_runner.go:130] > # big_files_temporary_dir = ""
	I1213 13:16:24.444807  139657 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1213 13:16:24.444818  139657 command_runner.go:130] > # CNI plugins.
	I1213 13:16:24.444827  139657 command_runner.go:130] > [crio.network]
	I1213 13:16:24.444837  139657 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1213 13:16:24.444847  139657 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1213 13:16:24.444854  139657 command_runner.go:130] > # cni_default_network = ""
	I1213 13:16:24.444863  139657 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1213 13:16:24.444871  139657 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1213 13:16:24.444880  139657 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1213 13:16:24.444887  139657 command_runner.go:130] > # plugin_dirs = [
	I1213 13:16:24.444894  139657 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1213 13:16:24.444898  139657 command_runner.go:130] > # ]
	I1213 13:16:24.444913  139657 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1213 13:16:24.444923  139657 command_runner.go:130] > [crio.metrics]
	I1213 13:16:24.444931  139657 command_runner.go:130] > # Globally enable or disable metrics support.
	I1213 13:16:24.444941  139657 command_runner.go:130] > enable_metrics = true
	I1213 13:16:24.444949  139657 command_runner.go:130] > # Specify enabled metrics collectors.
	I1213 13:16:24.444959  139657 command_runner.go:130] > # By default, all metrics are enabled.
	I1213 13:16:24.444971  139657 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1213 13:16:24.444984  139657 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1213 13:16:24.445004  139657 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1213 13:16:24.445013  139657 command_runner.go:130] > # metrics_collectors = [
	I1213 13:16:24.445020  139657 command_runner.go:130] > # 	"operations",
	I1213 13:16:24.445031  139657 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1213 13:16:24.445038  139657 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1213 13:16:24.445045  139657 command_runner.go:130] > # 	"operations_errors",
	I1213 13:16:24.445052  139657 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1213 13:16:24.445060  139657 command_runner.go:130] > # 	"image_pulls_by_name",
	I1213 13:16:24.445068  139657 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1213 13:16:24.445085  139657 command_runner.go:130] > # 	"image_pulls_failures",
	I1213 13:16:24.445092  139657 command_runner.go:130] > # 	"image_pulls_successes",
	I1213 13:16:24.445099  139657 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1213 13:16:24.445110  139657 command_runner.go:130] > # 	"image_layer_reuse",
	I1213 13:16:24.445121  139657 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1213 13:16:24.445128  139657 command_runner.go:130] > # 	"containers_oom_total",
	I1213 13:16:24.445134  139657 command_runner.go:130] > # 	"containers_oom",
	I1213 13:16:24.445141  139657 command_runner.go:130] > # 	"processes_defunct",
	I1213 13:16:24.445147  139657 command_runner.go:130] > # 	"operations_total",
	I1213 13:16:24.445155  139657 command_runner.go:130] > # 	"operations_latency_seconds",
	I1213 13:16:24.445163  139657 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1213 13:16:24.445170  139657 command_runner.go:130] > # 	"operations_errors_total",
	I1213 13:16:24.445178  139657 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1213 13:16:24.445186  139657 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1213 13:16:24.445194  139657 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1213 13:16:24.445202  139657 command_runner.go:130] > # 	"image_pulls_success_total",
	I1213 13:16:24.445210  139657 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1213 13:16:24.445218  139657 command_runner.go:130] > # 	"containers_oom_count_total",
	I1213 13:16:24.445231  139657 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1213 13:16:24.445238  139657 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1213 13:16:24.445244  139657 command_runner.go:130] > # ]
	I1213 13:16:24.445253  139657 command_runner.go:130] > # The port on which the metrics server will listen.
	I1213 13:16:24.445259  139657 command_runner.go:130] > # metrics_port = 9090
	I1213 13:16:24.445268  139657 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1213 13:16:24.445284  139657 command_runner.go:130] > # metrics_socket = ""
	I1213 13:16:24.445295  139657 command_runner.go:130] > # The certificate for the secure metrics server.
	I1213 13:16:24.445306  139657 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1213 13:16:24.445319  139657 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1213 13:16:24.445328  139657 command_runner.go:130] > # certificate on any modification event.
	I1213 13:16:24.445335  139657 command_runner.go:130] > # metrics_cert = ""
	I1213 13:16:24.445344  139657 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1213 13:16:24.445355  139657 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1213 13:16:24.445360  139657 command_runner.go:130] > # metrics_key = ""
	I1213 13:16:24.445370  139657 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1213 13:16:24.445379  139657 command_runner.go:130] > [crio.tracing]
	I1213 13:16:24.445387  139657 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1213 13:16:24.445394  139657 command_runner.go:130] > # enable_tracing = false
	I1213 13:16:24.445403  139657 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1213 13:16:24.445413  139657 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1213 13:16:24.445424  139657 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1213 13:16:24.445435  139657 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1213 13:16:24.445444  139657 command_runner.go:130] > # CRI-O NRI configuration.
	I1213 13:16:24.445450  139657 command_runner.go:130] > [crio.nri]
	I1213 13:16:24.445457  139657 command_runner.go:130] > # Globally enable or disable NRI.
	I1213 13:16:24.445465  139657 command_runner.go:130] > # enable_nri = false
	I1213 13:16:24.445471  139657 command_runner.go:130] > # NRI socket to listen on.
	I1213 13:16:24.445479  139657 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1213 13:16:24.445490  139657 command_runner.go:130] > # NRI plugin directory to use.
	I1213 13:16:24.445498  139657 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1213 13:16:24.445509  139657 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1213 13:16:24.445518  139657 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1213 13:16:24.445528  139657 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1213 13:16:24.445539  139657 command_runner.go:130] > # nri_disable_connections = false
	I1213 13:16:24.445548  139657 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1213 13:16:24.445556  139657 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1213 13:16:24.445564  139657 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1213 13:16:24.445572  139657 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1213 13:16:24.445606  139657 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1213 13:16:24.445616  139657 command_runner.go:130] > [crio.stats]
	I1213 13:16:24.445625  139657 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1213 13:16:24.445640  139657 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1213 13:16:24.445648  139657 command_runner.go:130] > # stats_collection_period = 0
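
The dump above ends with CRI-O's [crio.network], [crio.metrics], [crio.tracing], [crio.nri], and [crio.stats] tables; only enable_metrics is set explicitly, everything else is a commented-out default. A minimal sketch (not taken from the minikube source) of reading such a crio.conf-style TOML file with the BurntSushi decoder and reporting the effective metrics settings; the file path and struct shape are assumptions for the sketch:

// crio_metrics.go - sketch: decode [crio.metrics] from a crio.conf-style TOML
// file and print whether metrics are enabled and on which port.
package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

type crioConf struct {
	Crio struct {
		Metrics struct {
			EnableMetrics bool `toml:"enable_metrics"`
			MetricsPort   int  `toml:"metrics_port"`
		} `toml:"metrics"`
	} `toml:"crio"`
}

func main() {
	var cfg crioConf
	// documented default when the key is commented out in the dump above
	cfg.Crio.Metrics.MetricsPort = 9090

	if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil { // illustrative path
		log.Fatalf("decode crio.conf: %v", err)
	}
	fmt.Printf("metrics enabled: %v (port %d)\n",
		cfg.Crio.Metrics.EnableMetrics, cfg.Crio.Metrics.MetricsPort)
}
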
	I1213 13:16:24.445769  139657 cni.go:84] Creating CNI manager for ""
	I1213 13:16:24.445787  139657 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 13:16:24.445812  139657 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 13:16:24.445847  139657 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.124 APIServerPort:8441 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-101171 NodeName:functional-101171 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.124"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.124 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 13:16:24.446054  139657 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.124
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-101171"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.124"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.124"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 13:16:24.446191  139657 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 13:16:24.458394  139657 command_runner.go:130] > kubeadm
	I1213 13:16:24.458424  139657 command_runner.go:130] > kubectl
	I1213 13:16:24.458446  139657 command_runner.go:130] > kubelet
	I1213 13:16:24.458789  139657 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 13:16:24.458853  139657 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 13:16:24.471347  139657 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1213 13:16:24.493805  139657 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 13:16:24.515984  139657 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1213 13:16:24.538444  139657 ssh_runner.go:195] Run: grep 192.168.39.124	control-plane.minikube.internal$ /etc/hosts
	I1213 13:16:24.543369  139657 command_runner.go:130] > 192.168.39.124	control-plane.minikube.internal
	I1213 13:16:24.543465  139657 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:16:24.727714  139657 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 13:16:24.748340  139657 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-101171 for IP: 192.168.39.124
	I1213 13:16:24.748371  139657 certs.go:195] generating shared ca certs ...
	I1213 13:16:24.748391  139657 certs.go:227] acquiring lock for ca certs: {Name:mk4d1e73c1a19abecca2e995e14d97b9ab149024 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:16:24.748616  139657 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-131207/.minikube/ca.key
	I1213 13:16:24.748684  139657 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-131207/.minikube/proxy-client-ca.key
	I1213 13:16:24.748697  139657 certs.go:257] generating profile certs ...
	I1213 13:16:24.748799  139657 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-101171/client.key
	I1213 13:16:24.748886  139657 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-101171/apiserver.key.194f038f
	I1213 13:16:24.748927  139657 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-101171/proxy-client.key
	I1213 13:16:24.748940  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 13:16:24.748961  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 13:16:24.748976  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 13:16:24.748999  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 13:16:24.749016  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-101171/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 13:16:24.749031  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-101171/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 13:16:24.749046  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-101171/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 13:16:24.749066  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-101171/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 13:16:24.749158  139657 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/135234.pem (1338 bytes)
	W1213 13:16:24.749196  139657 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-131207/.minikube/certs/135234_empty.pem, impossibly tiny 0 bytes
	I1213 13:16:24.749208  139657 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 13:16:24.749236  139657 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem (1078 bytes)
	I1213 13:16:24.749267  139657 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/cert.pem (1123 bytes)
	I1213 13:16:24.749300  139657 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/key.pem (1675 bytes)
	I1213 13:16:24.749360  139657 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/ssl/certs/1352342.pem (1708 bytes)
	I1213 13:16:24.749402  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:16:24.749419  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/135234.pem -> /usr/share/ca-certificates/135234.pem
	I1213 13:16:24.749434  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/ssl/certs/1352342.pem -> /usr/share/ca-certificates/1352342.pem
	I1213 13:16:24.750215  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 13:16:24.784325  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 13:16:24.817785  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 13:16:24.853144  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 13:16:24.890536  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-101171/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 13:16:24.926567  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-101171/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 13:16:24.962010  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-101171/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 13:16:24.998369  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-101171/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 13:16:25.032230  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 13:16:25.068964  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/certs/135234.pem --> /usr/share/ca-certificates/135234.pem (1338 bytes)
	I1213 13:16:25.102766  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/ssl/certs/1352342.pem --> /usr/share/ca-certificates/1352342.pem (1708 bytes)
	I1213 13:16:25.136252  139657 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 13:16:25.160868  139657 ssh_runner.go:195] Run: openssl version
	I1213 13:16:25.169220  139657 command_runner.go:130] > OpenSSL 3.4.1 11 Feb 2025 (Library: OpenSSL 3.4.1 11 Feb 2025)
	I1213 13:16:25.169344  139657 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1352342.pem
	I1213 13:16:25.182662  139657 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1352342.pem /etc/ssl/certs/1352342.pem
	I1213 13:16:25.196346  139657 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1352342.pem
	I1213 13:16:25.202552  139657 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 13 13:13 /usr/share/ca-certificates/1352342.pem
	I1213 13:16:25.202645  139657 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 13:13 /usr/share/ca-certificates/1352342.pem
	I1213 13:16:25.202700  139657 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1352342.pem
	I1213 13:16:25.211067  139657 command_runner.go:130] > 3ec20f2e
	I1213 13:16:25.211253  139657 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 13:16:25.224328  139657 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:16:25.238368  139657 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 13:16:25.252003  139657 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:16:25.258273  139657 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 13 13:06 /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:16:25.258311  139657 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 13:06 /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:16:25.258360  139657 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:16:25.266989  139657 command_runner.go:130] > b5213941
	I1213 13:16:25.267145  139657 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 13:16:25.280410  139657 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/135234.pem
	I1213 13:16:25.293801  139657 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/135234.pem /etc/ssl/certs/135234.pem
	I1213 13:16:25.308024  139657 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/135234.pem
	I1213 13:16:25.313993  139657 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 13 13:13 /usr/share/ca-certificates/135234.pem
	I1213 13:16:25.314032  139657 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 13:13 /usr/share/ca-certificates/135234.pem
	I1213 13:16:25.314112  139657 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/135234.pem
	I1213 13:16:25.322512  139657 command_runner.go:130] > 51391683
	I1213 13:16:25.322716  139657 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
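
Each of the three certificates above is installed the same way: test that the PEM is non-empty, symlink it under /usr/share/ca-certificates, ask openssl for its subject hash (3ec20f2e, b5213941, 51391683), then check that /etc/ssl/certs/<hash>.0 exists. One way to reproduce the same effect is sketched below (not how minikube's code necessarily does it; assumes openssl on PATH and root, and the path is illustrative):

// install_ca.go - sketch of the CA-install pattern seen above: take the
// openssl subject hash of a PEM and create the /etc/ssl/certs/<hash>.0 link.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hash %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	_ = os.Remove(link) // mimic ln -fs: replace an existing link if present
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
}
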
	I1213 13:16:25.335714  139657 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 13:16:25.341584  139657 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 13:16:25.341629  139657 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1213 13:16:25.341635  139657 command_runner.go:130] > Device: 253,1	Inode: 7338073     Links: 1
	I1213 13:16:25.341641  139657 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 13:16:25.341647  139657 command_runner.go:130] > Access: 2025-12-13 13:13:54.213466193 +0000
	I1213 13:16:25.341652  139657 command_runner.go:130] > Modify: 2025-12-13 13:13:54.213466193 +0000
	I1213 13:16:25.341657  139657 command_runner.go:130] > Change: 2025-12-13 13:13:54.213466193 +0000
	I1213 13:16:25.341662  139657 command_runner.go:130] >  Birth: 2025-12-13 13:13:54.213466193 +0000
	I1213 13:16:25.341740  139657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 13:16:25.350002  139657 command_runner.go:130] > Certificate will not expire
	I1213 13:16:25.350186  139657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 13:16:25.358329  139657 command_runner.go:130] > Certificate will not expire
	I1213 13:16:25.358448  139657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 13:16:25.366344  139657 command_runner.go:130] > Certificate will not expire
	I1213 13:16:25.366481  139657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 13:16:25.374941  139657 command_runner.go:130] > Certificate will not expire
	I1213 13:16:25.375017  139657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 13:16:25.383466  139657 command_runner.go:130] > Certificate will not expire
	I1213 13:16:25.383560  139657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 13:16:25.391728  139657 command_runner.go:130] > Certificate will not expire
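
Each `openssl x509 -noout -checkend 86400` call above verifies that a certificate will not expire within the next 24 hours. A minimal sketch of the same check in Go, using only the standard library (the path is one of the certificates from the log above):

// checkend.go - sketch equivalent of `openssl x509 -noout -checkend 86400`:
// parse a PEM certificate and report whether it expires within 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}
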
	I1213 13:16:25.391825  139657 kubeadm.go:401] StartCluster: {Name:functional-101171 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34
.2 ClusterName:functional-101171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.124 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:16:25.391949  139657 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 13:16:25.392028  139657 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 13:16:25.432281  139657 command_runner.go:130] > f92a3a092485a0ac1dc51a2bc6f50ba873a8493104faa8027f92b47afffd326c
	I1213 13:16:25.432316  139657 command_runner.go:130] > 0f7e3e7bcf1b4fc58d523ea0a6b71f4b7f6159f908472192dec50c5c4773a6c8
	I1213 13:16:25.432327  139657 command_runner.go:130] > 82d65eddb23627c8c7b03d97ee25384a4641be44a8ce176195431ba631e420a4
	I1213 13:16:25.432337  139657 command_runner.go:130] > c035d6ae568bcf65e2e0e0ac9c8e33c9683cfa5e9962808be5bc1d7e90560b68
	I1213 13:16:25.432345  139657 command_runner.go:130] > f2e4d14cfaaeb50496758e8c7af82df0842b56679f7302760beb406f1d2377b0
	I1213 13:16:25.432364  139657 command_runner.go:130] > 5c5106dd6f44ef172d73e559df759af56aae17be00dbd7bda168113b5c87103e
	I1213 13:16:25.432372  139657 command_runner.go:130] > 9098da3bf6a16aa5aca362d77b4eefdf3d8740ee47058bac1f57462956a0ec41
	I1213 13:16:25.432382  139657 command_runner.go:130] > 032a755151e3edddee963cde3642ebab28ccd3cad4f977f5abe9be2793036fd5
	I1213 13:16:25.432392  139657 command_runner.go:130] > f8b0288ee3d2f686e17cab2f0126717e4773c0a011bf820a99b08c7146415889
	I1213 13:16:25.432405  139657 command_runner.go:130] > cb7606d3b6d8f2b73f95595faf6894b2622d71cebaf6f7aa31ae8cac07f16b57
	I1213 13:16:25.432417  139657 command_runner.go:130] > f02d47f5908b9925ba08e11c9c86ffc993d978b0210bc885a88444e31b6a2a63
	I1213 13:16:25.432448  139657 cri.go:89] found id: "f92a3a092485a0ac1dc51a2bc6f50ba873a8493104faa8027f92b47afffd326c"
	I1213 13:16:25.432463  139657 cri.go:89] found id: "0f7e3e7bcf1b4fc58d523ea0a6b71f4b7f6159f908472192dec50c5c4773a6c8"
	I1213 13:16:25.432471  139657 cri.go:89] found id: "82d65eddb23627c8c7b03d97ee25384a4641be44a8ce176195431ba631e420a4"
	I1213 13:16:25.432481  139657 cri.go:89] found id: "c035d6ae568bcf65e2e0e0ac9c8e33c9683cfa5e9962808be5bc1d7e90560b68"
	I1213 13:16:25.432487  139657 cri.go:89] found id: "f2e4d14cfaaeb50496758e8c7af82df0842b56679f7302760beb406f1d2377b0"
	I1213 13:16:25.432495  139657 cri.go:89] found id: "5c5106dd6f44ef172d73e559df759af56aae17be00dbd7bda168113b5c87103e"
	I1213 13:16:25.432501  139657 cri.go:89] found id: "9098da3bf6a16aa5aca362d77b4eefdf3d8740ee47058bac1f57462956a0ec41"
	I1213 13:16:25.432510  139657 cri.go:89] found id: "032a755151e3edddee963cde3642ebab28ccd3cad4f977f5abe9be2793036fd5"
	I1213 13:16:25.432516  139657 cri.go:89] found id: "f8b0288ee3d2f686e17cab2f0126717e4773c0a011bf820a99b08c7146415889"
	I1213 13:16:25.432528  139657 cri.go:89] found id: "cb7606d3b6d8f2b73f95595faf6894b2622d71cebaf6f7aa31ae8cac07f16b57"
	I1213 13:16:25.432537  139657 cri.go:89] found id: "f02d47f5908b9925ba08e11c9c86ffc993d978b0210bc885a88444e31b6a2a63"
	I1213 13:16:25.432544  139657 cri.go:89] found id: ""
	I1213 13:16:25.432611  139657 ssh_runner.go:195] Run: sudo runc list -f json
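
The listing above is how the start path inventories existing kube-system containers before deciding how to restart the cluster: crictl returns one container ID per line, and cri.go records each as a "found id" entry. A minimal sketch of the same call (assumptions: crictl on PATH and sufficient privileges):

// list_kube_system.go - sketch of the container inventory step above: run
// crictl with a namespace label filter and collect the returned IDs.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		log.Fatal(err)
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
}
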

                                                
                                                
** /stderr **
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-amd64 start -p functional-101171 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 13m55.263422548s for "functional-101171" cluster.
I1213 13:28:40.086602  135234 config.go:182] Loaded profile config "functional-101171": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctional/serial/SoftStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-101171 -n functional-101171
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-101171 -n functional-101171: exit status 2 (195.755749ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctional/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctional/serial/SoftStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-101171 logs -n 25
E1213 13:29:35.228604  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:33:12.162969  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:38:12.163395  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-101171 logs -n 25: (10m37.735127293s)
helpers_test.go:261: TestFunctional/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                         ARGS                                                          │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ addons-685870 addons disable cloud-spanner --alsologtostderr -v=1                                                     │ addons-685870     │ jenkins │ v1.37.0 │ 13 Dec 25 13:09 UTC │ 13 Dec 25 13:09 UTC │
	│ ip      │ addons-685870 ip                                                                                                      │ addons-685870     │ jenkins │ v1.37.0 │ 13 Dec 25 13:11 UTC │ 13 Dec 25 13:11 UTC │
	│ addons  │ addons-685870 addons disable ingress-dns --alsologtostderr -v=1                                                       │ addons-685870     │ jenkins │ v1.37.0 │ 13 Dec 25 13:11 UTC │ 13 Dec 25 13:11 UTC │
	│ addons  │ addons-685870 addons disable ingress --alsologtostderr -v=1                                                           │ addons-685870     │ jenkins │ v1.37.0 │ 13 Dec 25 13:11 UTC │ 13 Dec 25 13:11 UTC │
	│ stop    │ -p addons-685870                                                                                                      │ addons-685870     │ jenkins │ v1.37.0 │ 13 Dec 25 13:11 UTC │ 13 Dec 25 13:12 UTC │
	│ addons  │ enable dashboard -p addons-685870                                                                                     │ addons-685870     │ jenkins │ v1.37.0 │ 13 Dec 25 13:12 UTC │ 13 Dec 25 13:12 UTC │
	│ addons  │ disable dashboard -p addons-685870                                                                                    │ addons-685870     │ jenkins │ v1.37.0 │ 13 Dec 25 13:12 UTC │ 13 Dec 25 13:12 UTC │
	│ addons  │ disable gvisor -p addons-685870                                                                                       │ addons-685870     │ jenkins │ v1.37.0 │ 13 Dec 25 13:12 UTC │ 13 Dec 25 13:12 UTC │
	│ delete  │ -p addons-685870                                                                                                      │ addons-685870     │ jenkins │ v1.37.0 │ 13 Dec 25 13:12 UTC │ 13 Dec 25 13:12 UTC │
	│ start   │ -p nospam-339903 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-339903 --driver=kvm2  --container-runtime=crio │ nospam-339903     │ jenkins │ v1.37.0 │ 13 Dec 25 13:12 UTC │ 13 Dec 25 13:13 UTC │
	│ start   │ nospam-339903 --log_dir /tmp/nospam-339903 start --dry-run                                                            │ nospam-339903     │ jenkins │ v1.37.0 │ 13 Dec 25 13:13 UTC │                     │
	│ start   │ nospam-339903 --log_dir /tmp/nospam-339903 start --dry-run                                                            │ nospam-339903     │ jenkins │ v1.37.0 │ 13 Dec 25 13:13 UTC │                     │
	│ start   │ nospam-339903 --log_dir /tmp/nospam-339903 start --dry-run                                                            │ nospam-339903     │ jenkins │ v1.37.0 │ 13 Dec 25 13:13 UTC │                     │
	│ pause   │ nospam-339903 --log_dir /tmp/nospam-339903 pause                                                                      │ nospam-339903     │ jenkins │ v1.37.0 │ 13 Dec 25 13:13 UTC │ 13 Dec 25 13:13 UTC │
	│ pause   │ nospam-339903 --log_dir /tmp/nospam-339903 pause                                                                      │ nospam-339903     │ jenkins │ v1.37.0 │ 13 Dec 25 13:13 UTC │ 13 Dec 25 13:13 UTC │
	│ pause   │ nospam-339903 --log_dir /tmp/nospam-339903 pause                                                                      │ nospam-339903     │ jenkins │ v1.37.0 │ 13 Dec 25 13:13 UTC │ 13 Dec 25 13:13 UTC │
	│ unpause │ nospam-339903 --log_dir /tmp/nospam-339903 unpause                                                                    │ nospam-339903     │ jenkins │ v1.37.0 │ 13 Dec 25 13:13 UTC │ 13 Dec 25 13:13 UTC │
	│ unpause │ nospam-339903 --log_dir /tmp/nospam-339903 unpause                                                                    │ nospam-339903     │ jenkins │ v1.37.0 │ 13 Dec 25 13:13 UTC │ 13 Dec 25 13:13 UTC │
	│ unpause │ nospam-339903 --log_dir /tmp/nospam-339903 unpause                                                                    │ nospam-339903     │ jenkins │ v1.37.0 │ 13 Dec 25 13:13 UTC │ 13 Dec 25 13:13 UTC │
	│ stop    │ nospam-339903 --log_dir /tmp/nospam-339903 stop                                                                       │ nospam-339903     │ jenkins │ v1.37.0 │ 13 Dec 25 13:13 UTC │ 13 Dec 25 13:13 UTC │
	│ stop    │ nospam-339903 --log_dir /tmp/nospam-339903 stop                                                                       │ nospam-339903     │ jenkins │ v1.37.0 │ 13 Dec 25 13:13 UTC │ 13 Dec 25 13:13 UTC │
	│ stop    │ nospam-339903 --log_dir /tmp/nospam-339903 stop                                                                       │ nospam-339903     │ jenkins │ v1.37.0 │ 13 Dec 25 13:13 UTC │ 13 Dec 25 13:13 UTC │
	│ delete  │ -p nospam-339903                                                                                                      │ nospam-339903     │ jenkins │ v1.37.0 │ 13 Dec 25 13:13 UTC │ 13 Dec 25 13:13 UTC │
	│ start   │ -p functional-101171 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio           │ functional-101171 │ jenkins │ v1.37.0 │ 13 Dec 25 13:13 UTC │ 13 Dec 25 13:14 UTC │
	│ start   │ -p functional-101171 --alsologtostderr -v=8                                                                           │ functional-101171 │ jenkins │ v1.37.0 │ 13 Dec 25 13:14 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 13:14:44
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 13:14:44.880702  139657 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:14:44.880839  139657 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:14:44.880850  139657 out.go:374] Setting ErrFile to fd 2...
	I1213 13:14:44.880858  139657 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:14:44.881087  139657 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-131207/.minikube/bin
	I1213 13:14:44.881551  139657 out.go:368] Setting JSON to false
	I1213 13:14:44.882447  139657 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3425,"bootTime":1765628260,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:14:44.882501  139657 start.go:143] virtualization: kvm guest
	I1213 13:14:44.884268  139657 out.go:179] * [functional-101171] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 13:14:44.885270  139657 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 13:14:44.885307  139657 notify.go:221] Checking for updates...
	I1213 13:14:44.887088  139657 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:14:44.888140  139657 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-131207/kubeconfig
	I1213 13:14:44.889099  139657 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-131207/.minikube
	I1213 13:14:44.890102  139657 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 13:14:44.891038  139657 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 13:14:44.892542  139657 config.go:182] Loaded profile config "functional-101171": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:14:44.892673  139657 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 13:14:44.927435  139657 out.go:179] * Using the kvm2 driver based on existing profile
	I1213 13:14:44.928372  139657 start.go:309] selected driver: kvm2
	I1213 13:14:44.928386  139657 start.go:927] validating driver "kvm2" against &{Name:functional-101171 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:functional-101171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.124 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 M
ountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:14:44.928499  139657 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 13:14:44.929402  139657 cni.go:84] Creating CNI manager for ""
	I1213 13:14:44.929464  139657 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 13:14:44.929513  139657 start.go:353] cluster config:
	{Name:functional-101171 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-101171 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.124 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:14:44.929611  139657 iso.go:125] acquiring lock: {Name:mk3b22d147b17c1b05cdcd03e16c3f962e91cdaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 13:14:44.930834  139657 out.go:179] * Starting "functional-101171" primary control-plane node in "functional-101171" cluster
	I1213 13:14:44.931691  139657 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 13:14:44.931725  139657 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-131207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1213 13:14:44.931737  139657 cache.go:65] Caching tarball of preloaded images
	I1213 13:14:44.931865  139657 preload.go:238] Found /home/jenkins/minikube-integration/22122-131207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 13:14:44.931879  139657 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 13:14:44.931980  139657 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-101171/config.json ...
	I1213 13:14:44.932230  139657 start.go:360] acquireMachinesLock for functional-101171: {Name:mkd3517afd6ad3d581ae9f96a02a4688cf83ce0e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 13:14:44.932293  139657 start.go:364] duration metric: took 38.36µs to acquireMachinesLock for "functional-101171"
	I1213 13:14:44.932313  139657 start.go:96] Skipping create...Using existing machine configuration
	I1213 13:14:44.932324  139657 fix.go:54] fixHost starting: 
	I1213 13:14:44.933932  139657 fix.go:112] recreateIfNeeded on functional-101171: state=Running err=<nil>
	W1213 13:14:44.933963  139657 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 13:14:44.935205  139657 out.go:252] * Updating the running kvm2 "functional-101171" VM ...
	I1213 13:14:44.935228  139657 machine.go:94] provisionDockerMachine start ...
	I1213 13:14:44.937452  139657 main.go:143] libmachine: domain functional-101171 has defined MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:44.937806  139657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7c:80:d0", ip: ""} in network mk-functional-101171: {Iface:virbr1 ExpiryTime:2025-12-13 14:13:45 +0000 UTC Type:0 Mac:52:54:00:7c:80:d0 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-101171 Clientid:01:52:54:00:7c:80:d0}
	I1213 13:14:44.937835  139657 main.go:143] libmachine: domain functional-101171 has defined IP address 192.168.39.124 and MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:44.938001  139657 main.go:143] libmachine: Using SSH client type: native
	I1213 13:14:44.938338  139657 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.124 22 <nil> <nil>}
	I1213 13:14:44.938355  139657 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 13:14:45.046797  139657 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-101171
	
	I1213 13:14:45.046826  139657 buildroot.go:166] provisioning hostname "functional-101171"
	I1213 13:14:45.049877  139657 main.go:143] libmachine: domain functional-101171 has defined MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:45.050321  139657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7c:80:d0", ip: ""} in network mk-functional-101171: {Iface:virbr1 ExpiryTime:2025-12-13 14:13:45 +0000 UTC Type:0 Mac:52:54:00:7c:80:d0 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-101171 Clientid:01:52:54:00:7c:80:d0}
	I1213 13:14:45.050355  139657 main.go:143] libmachine: domain functional-101171 has defined IP address 192.168.39.124 and MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:45.050541  139657 main.go:143] libmachine: Using SSH client type: native
	I1213 13:14:45.050782  139657 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.124 22 <nil> <nil>}
	I1213 13:14:45.050798  139657 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-101171 && echo "functional-101171" | sudo tee /etc/hostname
	I1213 13:14:45.172748  139657 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-101171
	
	I1213 13:14:45.175509  139657 main.go:143] libmachine: domain functional-101171 has defined MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:45.175971  139657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7c:80:d0", ip: ""} in network mk-functional-101171: {Iface:virbr1 ExpiryTime:2025-12-13 14:13:45 +0000 UTC Type:0 Mac:52:54:00:7c:80:d0 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-101171 Clientid:01:52:54:00:7c:80:d0}
	I1213 13:14:45.176008  139657 main.go:143] libmachine: domain functional-101171 has defined IP address 192.168.39.124 and MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:45.176182  139657 main.go:143] libmachine: Using SSH client type: native
	I1213 13:14:45.176385  139657 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.124 22 <nil> <nil>}
	I1213 13:14:45.176400  139657 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-101171' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-101171/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-101171' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 13:14:45.281039  139657 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 13:14:45.281099  139657 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22122-131207/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-131207/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-131207/.minikube}
	I1213 13:14:45.281128  139657 buildroot.go:174] setting up certificates
	I1213 13:14:45.281147  139657 provision.go:84] configureAuth start
	I1213 13:14:45.283949  139657 main.go:143] libmachine: domain functional-101171 has defined MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:45.284380  139657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7c:80:d0", ip: ""} in network mk-functional-101171: {Iface:virbr1 ExpiryTime:2025-12-13 14:13:45 +0000 UTC Type:0 Mac:52:54:00:7c:80:d0 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-101171 Clientid:01:52:54:00:7c:80:d0}
	I1213 13:14:45.284418  139657 main.go:143] libmachine: domain functional-101171 has defined IP address 192.168.39.124 and MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:45.286705  139657 main.go:143] libmachine: domain functional-101171 has defined MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:45.287058  139657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7c:80:d0", ip: ""} in network mk-functional-101171: {Iface:virbr1 ExpiryTime:2025-12-13 14:13:45 +0000 UTC Type:0 Mac:52:54:00:7c:80:d0 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-101171 Clientid:01:52:54:00:7c:80:d0}
	I1213 13:14:45.287116  139657 main.go:143] libmachine: domain functional-101171 has defined IP address 192.168.39.124 and MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:45.287256  139657 provision.go:143] copyHostCerts
	I1213 13:14:45.287299  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22122-131207/.minikube/ca.pem
	I1213 13:14:45.287346  139657 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-131207/.minikube/ca.pem, removing ...
	I1213 13:14:45.287365  139657 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-131207/.minikube/ca.pem
	I1213 13:14:45.287454  139657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-131207/.minikube/ca.pem (1078 bytes)
	I1213 13:14:45.287580  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22122-131207/.minikube/cert.pem
	I1213 13:14:45.287614  139657 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-131207/.minikube/cert.pem, removing ...
	I1213 13:14:45.287625  139657 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-131207/.minikube/cert.pem
	I1213 13:14:45.287672  139657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-131207/.minikube/cert.pem (1123 bytes)
	I1213 13:14:45.287766  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22122-131207/.minikube/key.pem
	I1213 13:14:45.287791  139657 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-131207/.minikube/key.pem, removing ...
	I1213 13:14:45.287797  139657 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-131207/.minikube/key.pem
	I1213 13:14:45.287842  139657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-131207/.minikube/key.pem (1675 bytes)
	I1213 13:14:45.287926  139657 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-131207/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca-key.pem org=jenkins.functional-101171 san=[127.0.0.1 192.168.39.124 functional-101171 localhost minikube]
	I1213 13:14:45.423318  139657 provision.go:177] copyRemoteCerts
	I1213 13:14:45.423403  139657 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 13:14:45.425911  139657 main.go:143] libmachine: domain functional-101171 has defined MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:45.426340  139657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7c:80:d0", ip: ""} in network mk-functional-101171: {Iface:virbr1 ExpiryTime:2025-12-13 14:13:45 +0000 UTC Type:0 Mac:52:54:00:7c:80:d0 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-101171 Clientid:01:52:54:00:7c:80:d0}
	I1213 13:14:45.426370  139657 main.go:143] libmachine: domain functional-101171 has defined IP address 192.168.39.124 and MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:45.426502  139657 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/functional-101171/id_rsa Username:docker}
	I1213 13:14:45.512848  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 13:14:45.512952  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 13:14:45.542724  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 13:14:45.542812  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 13:14:45.571677  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 13:14:45.571772  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 13:14:45.601284  139657 provision.go:87] duration metric: took 320.120369ms to configureAuth
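
	configureAuth above regenerates the machine server certificate with SANs for every name the TLS endpoint may be reached by (127.0.0.1, 192.168.39.124, functional-101171, localhost, minikube) and then copies the key, CA and cert into /etc/docker on the guest. Assuming openssl is available on the Jenkins host, the SANs in the generated file can be spot-checked:

	# Sketch: print the SANs baked into the freshly generated server cert;
	# they should match the san=[...] list logged above.
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/22122-131207/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'
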
	I1213 13:14:45.601314  139657 buildroot.go:189] setting minikube options for container-runtime
	I1213 13:14:45.601491  139657 config.go:182] Loaded profile config "functional-101171": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:14:45.604379  139657 main.go:143] libmachine: domain functional-101171 has defined MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:45.604741  139657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7c:80:d0", ip: ""} in network mk-functional-101171: {Iface:virbr1 ExpiryTime:2025-12-13 14:13:45 +0000 UTC Type:0 Mac:52:54:00:7c:80:d0 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-101171 Clientid:01:52:54:00:7c:80:d0}
	I1213 13:14:45.604764  139657 main.go:143] libmachine: domain functional-101171 has defined IP address 192.168.39.124 and MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:45.604932  139657 main.go:143] libmachine: Using SSH client type: native
	I1213 13:14:45.605181  139657 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.124 22 <nil> <nil>}
	I1213 13:14:45.605200  139657 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 13:14:51.168422  139657 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 13:14:51.168457  139657 machine.go:97] duration metric: took 6.233220346s to provisionDockerMachine
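
	The SSH command above writes a CRIO_MINIKUBE_OPTIONS drop-in (adding the service CIDR as an insecure registry) and restarts cri-o; the echoed output confirms the value that landed in the file. While the profile is running, the same file can be re-read from the host, a sketch assuming the profile name from the log:

	# Sketch: confirm the option file the provisioner just wrote.
	minikube -p functional-101171 ssh "cat /etc/sysconfig/crio.minikube"
	# Expected, per the log: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
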
	I1213 13:14:51.168486  139657 start.go:293] postStartSetup for "functional-101171" (driver="kvm2")
	I1213 13:14:51.168502  139657 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 13:14:51.168611  139657 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 13:14:51.171649  139657 main.go:143] libmachine: domain functional-101171 has defined MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:51.172012  139657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7c:80:d0", ip: ""} in network mk-functional-101171: {Iface:virbr1 ExpiryTime:2025-12-13 14:13:45 +0000 UTC Type:0 Mac:52:54:00:7c:80:d0 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-101171 Clientid:01:52:54:00:7c:80:d0}
	I1213 13:14:51.172099  139657 main.go:143] libmachine: domain functional-101171 has defined IP address 192.168.39.124 and MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:51.172264  139657 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/functional-101171/id_rsa Username:docker}
	I1213 13:14:51.256552  139657 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 13:14:51.261415  139657 command_runner.go:130] > NAME=Buildroot
	I1213 13:14:51.261442  139657 command_runner.go:130] > VERSION=2025.02-dirty
	I1213 13:14:51.261446  139657 command_runner.go:130] > ID=buildroot
	I1213 13:14:51.261450  139657 command_runner.go:130] > VERSION_ID=2025.02
	I1213 13:14:51.261455  139657 command_runner.go:130] > PRETTY_NAME="Buildroot 2025.02"
	I1213 13:14:51.261540  139657 info.go:137] Remote host: Buildroot 2025.02
	I1213 13:14:51.261567  139657 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-131207/.minikube/addons for local assets ...
	I1213 13:14:51.261651  139657 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-131207/.minikube/files for local assets ...
	I1213 13:14:51.261758  139657 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/ssl/certs/1352342.pem -> 1352342.pem in /etc/ssl/certs
	I1213 13:14:51.261772  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/ssl/certs/1352342.pem -> /etc/ssl/certs/1352342.pem
	I1213 13:14:51.261876  139657 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/test/nested/copy/135234/hosts -> hosts in /etc/test/nested/copy/135234
	I1213 13:14:51.261886  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/test/nested/copy/135234/hosts -> /etc/test/nested/copy/135234/hosts
	I1213 13:14:51.261944  139657 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/135234
	I1213 13:14:51.275404  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/ssl/certs/1352342.pem --> /etc/ssl/certs/1352342.pem (1708 bytes)
	I1213 13:14:51.304392  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/test/nested/copy/135234/hosts --> /etc/test/nested/copy/135234/hosts (40 bytes)
	I1213 13:14:51.390782  139657 start.go:296] duration metric: took 222.277729ms for postStartSetup
	I1213 13:14:51.390831  139657 fix.go:56] duration metric: took 6.458506569s for fixHost
	I1213 13:14:51.394087  139657 main.go:143] libmachine: domain functional-101171 has defined MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:51.394507  139657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7c:80:d0", ip: ""} in network mk-functional-101171: {Iface:virbr1 ExpiryTime:2025-12-13 14:13:45 +0000 UTC Type:0 Mac:52:54:00:7c:80:d0 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-101171 Clientid:01:52:54:00:7c:80:d0}
	I1213 13:14:51.394539  139657 main.go:143] libmachine: domain functional-101171 has defined IP address 192.168.39.124 and MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:51.394733  139657 main.go:143] libmachine: Using SSH client type: native
	I1213 13:14:51.395032  139657 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.124 22 <nil> <nil>}
	I1213 13:14:51.395048  139657 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1213 13:14:51.547616  139657 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765631691.540521728
	
	I1213 13:14:51.547640  139657 fix.go:216] guest clock: 1765631691.540521728
	I1213 13:14:51.547663  139657 fix.go:229] Guest: 2025-12-13 13:14:51.540521728 +0000 UTC Remote: 2025-12-13 13:14:51.390838299 +0000 UTC m=+6.561594252 (delta=149.683429ms)
	I1213 13:14:51.547685  139657 fix.go:200] guest clock delta is within tolerance: 149.683429ms
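
	fix.go compares the guest's date +%s.%N against the host clock and accepts the roughly 150 ms delta measured here. The same comparison can be reproduced by hand; this is only a sketch and assumes bc is installed on the host:

	# Sketch: measure guest/host clock skew the same way the provisioner does.
	guest=$(minikube -p functional-101171 ssh "date +%s.%N")
	host=$(date +%s.%N)
	echo "guest - host skew: $(echo "$guest - $host" | bc) s"
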
	I1213 13:14:51.547691  139657 start.go:83] releasing machines lock for "functional-101171", held for 6.615387027s
	I1213 13:14:51.550620  139657 main.go:143] libmachine: domain functional-101171 has defined MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:51.551093  139657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7c:80:d0", ip: ""} in network mk-functional-101171: {Iface:virbr1 ExpiryTime:2025-12-13 14:13:45 +0000 UTC Type:0 Mac:52:54:00:7c:80:d0 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-101171 Clientid:01:52:54:00:7c:80:d0}
	I1213 13:14:51.551134  139657 main.go:143] libmachine: domain functional-101171 has defined IP address 192.168.39.124 and MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:51.551858  139657 ssh_runner.go:195] Run: cat /version.json
	I1213 13:14:51.551895  139657 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 13:14:51.555225  139657 main.go:143] libmachine: domain functional-101171 has defined MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:51.555396  139657 main.go:143] libmachine: domain functional-101171 has defined MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:51.555679  139657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7c:80:d0", ip: ""} in network mk-functional-101171: {Iface:virbr1 ExpiryTime:2025-12-13 14:13:45 +0000 UTC Type:0 Mac:52:54:00:7c:80:d0 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-101171 Clientid:01:52:54:00:7c:80:d0}
	I1213 13:14:51.555709  139657 main.go:143] libmachine: domain functional-101171 has defined IP address 192.168.39.124 and MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:51.555901  139657 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/functional-101171/id_rsa Username:docker}
	I1213 13:14:51.555915  139657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7c:80:d0", ip: ""} in network mk-functional-101171: {Iface:virbr1 ExpiryTime:2025-12-13 14:13:45 +0000 UTC Type:0 Mac:52:54:00:7c:80:d0 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-101171 Clientid:01:52:54:00:7c:80:d0}
	I1213 13:14:51.555948  139657 main.go:143] libmachine: domain functional-101171 has defined IP address 192.168.39.124 and MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:51.556188  139657 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/functional-101171/id_rsa Username:docker}
	I1213 13:14:51.711392  139657 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1213 13:14:51.711480  139657 command_runner.go:130] > {"iso_version": "v1.37.0-1765613186-22122", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "89f69959280ebeefd164cfeba1f5b84c6f004bc9"}
	I1213 13:14:51.711625  139657 ssh_runner.go:195] Run: systemctl --version
	I1213 13:14:51.721211  139657 command_runner.go:130] > systemd 256 (256.7)
	I1213 13:14:51.721261  139657 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP -LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT -LIBARCHIVE
	I1213 13:14:51.721342  139657 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 13:14:51.928878  139657 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 13:14:51.943312  139657 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1213 13:14:51.943381  139657 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 13:14:51.943457  139657 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 13:14:51.961133  139657 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
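
	ssh_runner prints the find invocation above with its shell escaping stripped; with quoting restored it reads as below. It renames any pre-existing bridge or podman CNI configs to *.mk_disabled so they cannot conflict with the CNI minikube manages (in this run there were none to disable):

	# Sketch: the same invocation with escaping restored. GNU find substitutes {}
	# anywhere in the -exec arguments, so the rename applies to each matched file.
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" \;
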
	I1213 13:14:51.961160  139657 start.go:496] detecting cgroup driver to use...
	I1213 13:14:51.961234  139657 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 13:14:52.008684  139657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 13:14:52.058685  139657 docker.go:218] disabling cri-docker service (if available) ...
	I1213 13:14:52.058767  139657 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 13:14:52.099652  139657 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 13:14:52.129214  139657 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 13:14:52.454020  139657 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 13:14:52.731152  139657 docker.go:234] disabling docker service ...
	I1213 13:14:52.731233  139657 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 13:14:52.789926  139657 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 13:14:52.807635  139657 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 13:14:53.089730  139657 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 13:14:53.328299  139657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
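
	Since the ISO also ships docker and cri-docker units, the runtime switch stops, disables and masks them before cri-o is configured. Condensed to the docker units, the pattern used above looks like the sketch below (the log applies the same sequence to cri-docker.socket and cri-docker.service):

	# Sketch: stop/disable/mask a competing runtime so only cri-o serves the CRI socket.
	sudo systemctl stop -f docker.socket docker.service
	sudo systemctl disable docker.socket
	sudo systemctl mask docker.service
	systemctl is-active --quiet docker || echo "docker is not active"
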
	I1213 13:14:53.351747  139657 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 13:14:53.384802  139657 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1213 13:14:53.384876  139657 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 13:14:53.385004  139657 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:14:53.402675  139657 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 13:14:53.402773  139657 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:14:53.425941  139657 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:14:53.444350  139657 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:14:53.459025  139657 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 13:14:53.488518  139657 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:14:53.515384  139657 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:14:53.531334  139657 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:14:53.545103  139657 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 13:14:53.555838  139657 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1213 13:14:53.556273  139657 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 13:14:53.567831  139657 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:14:53.751704  139657 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 13:16:24.195369  139657 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.443610327s)
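
	The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon_cgroup, unprivileged-port sysctl) before the restart, which in this run took over 90 seconds. The resulting values can be spot-checked in one command; the expected lines below are reproduced from the edits, not read back from the file:

	# Sketch: verify the drop-in values the provisioner just wrote.
	minikube -p functional-101171 ssh \
	  "grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"
	# Expected, per the edits above:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",
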
	I1213 13:16:24.195422  139657 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 13:16:24.195496  139657 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 13:16:24.201208  139657 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1213 13:16:24.201250  139657 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1213 13:16:24.201260  139657 command_runner.go:130] > Device: 0,23	Inode: 1994        Links: 1
	I1213 13:16:24.201270  139657 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 13:16:24.201277  139657 command_runner.go:130] > Access: 2025-12-13 13:16:24.024303909 +0000
	I1213 13:16:24.201287  139657 command_runner.go:130] > Modify: 2025-12-13 13:16:24.024303909 +0000
	I1213 13:16:24.201293  139657 command_runner.go:130] > Change: 2025-12-13 13:16:24.024303909 +0000
	I1213 13:16:24.201298  139657 command_runner.go:130] >  Birth: 2025-12-13 13:16:24.024303909 +0000
	I1213 13:16:24.201336  139657 start.go:564] Will wait 60s for crictl version
	I1213 13:16:24.201389  139657 ssh_runner.go:195] Run: which crictl
	I1213 13:16:24.205825  139657 command_runner.go:130] > /usr/bin/crictl
	I1213 13:16:24.205969  139657 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 13:16:24.240544  139657 command_runner.go:130] > Version:  0.1.0
	I1213 13:16:24.240566  139657 command_runner.go:130] > RuntimeName:  cri-o
	I1213 13:16:24.240571  139657 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1213 13:16:24.240576  139657 command_runner.go:130] > RuntimeApiVersion:  v1
	I1213 13:16:24.240600  139657 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
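
	start.go waits up to 60s for the socket and then for crictl to answer; the probe above can be repeated by hand to confirm the runtime reports itself as cri-o 1.29.1 with CRI API v1:

	# Sketch: repeat the runtime probe inside the guest.
	minikube -p functional-101171 ssh "sudo crictl version"
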
	I1213 13:16:24.240739  139657 ssh_runner.go:195] Run: crio --version
	I1213 13:16:24.274046  139657 command_runner.go:130] > crio version 1.29.1
	I1213 13:16:24.274084  139657 command_runner.go:130] > Version:        1.29.1
	I1213 13:16:24.274090  139657 command_runner.go:130] > GitCommit:      unknown
	I1213 13:16:24.274094  139657 command_runner.go:130] > GitCommitDate:  unknown
	I1213 13:16:24.274098  139657 command_runner.go:130] > GitTreeState:   clean
	I1213 13:16:24.274104  139657 command_runner.go:130] > BuildDate:      2025-12-13T11:21:09Z
	I1213 13:16:24.274108  139657 command_runner.go:130] > GoVersion:      go1.25.5
	I1213 13:16:24.274112  139657 command_runner.go:130] > Compiler:       gc
	I1213 13:16:24.274115  139657 command_runner.go:130] > Platform:       linux/amd64
	I1213 13:16:24.274119  139657 command_runner.go:130] > Linkmode:       dynamic
	I1213 13:16:24.274126  139657 command_runner.go:130] > BuildTags:      
	I1213 13:16:24.274131  139657 command_runner.go:130] >   containers_image_ostree_stub
	I1213 13:16:24.274135  139657 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1213 13:16:24.274138  139657 command_runner.go:130] >   btrfs_noversion
	I1213 13:16:24.274143  139657 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1213 13:16:24.274150  139657 command_runner.go:130] >   libdm_no_deferred_remove
	I1213 13:16:24.274153  139657 command_runner.go:130] >   seccomp
	I1213 13:16:24.274158  139657 command_runner.go:130] > LDFlags:          unknown
	I1213 13:16:24.274162  139657 command_runner.go:130] > SeccompEnabled:   true
	I1213 13:16:24.274166  139657 command_runner.go:130] > AppArmorEnabled:  false
	I1213 13:16:24.274253  139657 ssh_runner.go:195] Run: crio --version
	I1213 13:16:24.307345  139657 command_runner.go:130] > crio version 1.29.1
	I1213 13:16:24.307372  139657 command_runner.go:130] > Version:        1.29.1
	I1213 13:16:24.307385  139657 command_runner.go:130] > GitCommit:      unknown
	I1213 13:16:24.307390  139657 command_runner.go:130] > GitCommitDate:  unknown
	I1213 13:16:24.307394  139657 command_runner.go:130] > GitTreeState:   clean
	I1213 13:16:24.307400  139657 command_runner.go:130] > BuildDate:      2025-12-13T11:21:09Z
	I1213 13:16:24.307406  139657 command_runner.go:130] > GoVersion:      go1.25.5
	I1213 13:16:24.307412  139657 command_runner.go:130] > Compiler:       gc
	I1213 13:16:24.307419  139657 command_runner.go:130] > Platform:       linux/amd64
	I1213 13:16:24.307425  139657 command_runner.go:130] > Linkmode:       dynamic
	I1213 13:16:24.307436  139657 command_runner.go:130] > BuildTags:      
	I1213 13:16:24.307444  139657 command_runner.go:130] >   containers_image_ostree_stub
	I1213 13:16:24.307453  139657 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1213 13:16:24.307458  139657 command_runner.go:130] >   btrfs_noversion
	I1213 13:16:24.307462  139657 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1213 13:16:24.307468  139657 command_runner.go:130] >   libdm_no_deferred_remove
	I1213 13:16:24.307472  139657 command_runner.go:130] >   seccomp
	I1213 13:16:24.307476  139657 command_runner.go:130] > LDFlags:          unknown
	I1213 13:16:24.307481  139657 command_runner.go:130] > SeccompEnabled:   true
	I1213 13:16:24.307484  139657 command_runner.go:130] > AppArmorEnabled:  false
	I1213 13:16:24.309954  139657 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1213 13:16:24.314441  139657 main.go:143] libmachine: domain functional-101171 has defined MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:16:24.314910  139657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7c:80:d0", ip: ""} in network mk-functional-101171: {Iface:virbr1 ExpiryTime:2025-12-13 14:13:45 +0000 UTC Type:0 Mac:52:54:00:7c:80:d0 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-101171 Clientid:01:52:54:00:7c:80:d0}
	I1213 13:16:24.314934  139657 main.go:143] libmachine: domain functional-101171 has defined IP address 192.168.39.124 and MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:16:24.315179  139657 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1213 13:16:24.320471  139657 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1213 13:16:24.320604  139657 kubeadm.go:884] updating cluster {Name:functional-101171 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-101171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.124 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 13:16:24.320792  139657 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 13:16:24.320856  139657 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:16:24.358340  139657 command_runner.go:130] > {
	I1213 13:16:24.358367  139657 command_runner.go:130] >   "images":  [
	I1213 13:16:24.358373  139657 command_runner.go:130] >     {
	I1213 13:16:24.358385  139657 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1213 13:16:24.358391  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.358399  139657 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 13:16:24.358414  139657 command_runner.go:130] >       ],
	I1213 13:16:24.358422  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.358433  139657 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1213 13:16:24.358445  139657 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1213 13:16:24.358469  139657 command_runner.go:130] >       ],
	I1213 13:16:24.358478  139657 command_runner.go:130] >       "size":  "109379124",
	I1213 13:16:24.358484  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.358497  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.358504  139657 command_runner.go:130] >     },
	I1213 13:16:24.358509  139657 command_runner.go:130] >     {
	I1213 13:16:24.358519  139657 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1213 13:16:24.358529  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.358538  139657 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 13:16:24.358548  139657 command_runner.go:130] >       ],
	I1213 13:16:24.358553  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.358565  139657 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1213 13:16:24.358580  139657 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1213 13:16:24.358591  139657 command_runner.go:130] >       ],
	I1213 13:16:24.358598  139657 command_runner.go:130] >       "size":  "31470524",
	I1213 13:16:24.358604  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.358617  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.358623  139657 command_runner.go:130] >     },
	I1213 13:16:24.358626  139657 command_runner.go:130] >     {
	I1213 13:16:24.358634  139657 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1213 13:16:24.358644  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.358653  139657 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1213 13:16:24.358661  139657 command_runner.go:130] >       ],
	I1213 13:16:24.358668  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.358685  139657 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1213 13:16:24.358707  139657 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1213 13:16:24.358715  139657 command_runner.go:130] >       ],
	I1213 13:16:24.358721  139657 command_runner.go:130] >       "size":  "76103547",
	I1213 13:16:24.358731  139657 command_runner.go:130] >       "username":  "nonroot",
	I1213 13:16:24.358737  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.358744  139657 command_runner.go:130] >     },
	I1213 13:16:24.358748  139657 command_runner.go:130] >     {
	I1213 13:16:24.358757  139657 command_runner.go:130] >       "id":  "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1",
	I1213 13:16:24.358770  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.358779  139657 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 13:16:24.358784  139657 command_runner.go:130] >       ],
	I1213 13:16:24.358793  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.358810  139657 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1213 13:16:24.358823  139657 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"
	I1213 13:16:24.358828  139657 command_runner.go:130] >       ],
	I1213 13:16:24.358834  139657 command_runner.go:130] >       "size":  "63585106",
	I1213 13:16:24.358840  139657 command_runner.go:130] >       "uid":  {
	I1213 13:16:24.358849  139657 command_runner.go:130] >         "value":  "0"
	I1213 13:16:24.358855  139657 command_runner.go:130] >       },
	I1213 13:16:24.358875  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.358883  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.358889  139657 command_runner.go:130] >     },
	I1213 13:16:24.358896  139657 command_runner.go:130] >     {
	I1213 13:16:24.358905  139657 command_runner.go:130] >       "id":  "a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85",
	I1213 13:16:24.358911  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.358918  139657 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.2"
	I1213 13:16:24.358926  139657 command_runner.go:130] >       ],
	I1213 13:16:24.358933  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.358946  139657 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077",
	I1213 13:16:24.358960  139657 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd"
	I1213 13:16:24.358967  139657 command_runner.go:130] >       ],
	I1213 13:16:24.358974  139657 command_runner.go:130] >       "size":  "89046001",
	I1213 13:16:24.358982  139657 command_runner.go:130] >       "uid":  {
	I1213 13:16:24.358987  139657 command_runner.go:130] >         "value":  "0"
	I1213 13:16:24.358995  139657 command_runner.go:130] >       },
	I1213 13:16:24.359001  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.359010  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.359016  139657 command_runner.go:130] >     },
	I1213 13:16:24.359025  139657 command_runner.go:130] >     {
	I1213 13:16:24.359035  139657 command_runner.go:130] >       "id":  "01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8",
	I1213 13:16:24.359045  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.359060  139657 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.2"
	I1213 13:16:24.359103  139657 command_runner.go:130] >       ],
	I1213 13:16:24.359117  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.359130  139657 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb",
	I1213 13:16:24.359145  139657 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e"
	I1213 13:16:24.359151  139657 command_runner.go:130] >       ],
	I1213 13:16:24.359158  139657 command_runner.go:130] >       "size":  "76004183",
	I1213 13:16:24.359164  139657 command_runner.go:130] >       "uid":  {
	I1213 13:16:24.359169  139657 command_runner.go:130] >         "value":  "0"
	I1213 13:16:24.359177  139657 command_runner.go:130] >       },
	I1213 13:16:24.359182  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.359190  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.359196  139657 command_runner.go:130] >     },
	I1213 13:16:24.359201  139657 command_runner.go:130] >     {
	I1213 13:16:24.359218  139657 command_runner.go:130] >       "id":  "8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45",
	I1213 13:16:24.359228  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.359235  139657 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.2"
	I1213 13:16:24.359243  139657 command_runner.go:130] >       ],
	I1213 13:16:24.359251  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.359266  139657 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74",
	I1213 13:16:24.359281  139657 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"
	I1213 13:16:24.359291  139657 command_runner.go:130] >       ],
	I1213 13:16:24.359298  139657 command_runner.go:130] >       "size":  "73145240",
	I1213 13:16:24.359307  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.359314  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.359323  139657 command_runner.go:130] >     },
	I1213 13:16:24.359328  139657 command_runner.go:130] >     {
	I1213 13:16:24.359338  139657 command_runner.go:130] >       "id":  "88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952",
	I1213 13:16:24.359344  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.359350  139657 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.2"
	I1213 13:16:24.359355  139657 command_runner.go:130] >       ],
	I1213 13:16:24.359359  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.359366  139657 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6",
	I1213 13:16:24.359407  139657 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda"
	I1213 13:16:24.359414  139657 command_runner.go:130] >       ],
	I1213 13:16:24.359418  139657 command_runner.go:130] >       "size":  "53848919",
	I1213 13:16:24.359422  139657 command_runner.go:130] >       "uid":  {
	I1213 13:16:24.359425  139657 command_runner.go:130] >         "value":  "0"
	I1213 13:16:24.359428  139657 command_runner.go:130] >       },
	I1213 13:16:24.359432  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.359439  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.359442  139657 command_runner.go:130] >     },
	I1213 13:16:24.359445  139657 command_runner.go:130] >     {
	I1213 13:16:24.359453  139657 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1213 13:16:24.359457  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.359463  139657 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 13:16:24.359466  139657 command_runner.go:130] >       ],
	I1213 13:16:24.359470  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.359478  139657 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1213 13:16:24.359485  139657 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1213 13:16:24.359490  139657 command_runner.go:130] >       ],
	I1213 13:16:24.359494  139657 command_runner.go:130] >       "size":  "742092",
	I1213 13:16:24.359497  139657 command_runner.go:130] >       "uid":  {
	I1213 13:16:24.359501  139657 command_runner.go:130] >         "value":  "65535"
	I1213 13:16:24.359506  139657 command_runner.go:130] >       },
	I1213 13:16:24.359510  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.359514  139657 command_runner.go:130] >       "pinned":  true
	I1213 13:16:24.359519  139657 command_runner.go:130] >     }
	I1213 13:16:24.359522  139657 command_runner.go:130] >   ]
	I1213 13:16:24.359525  139657 command_runner.go:130] > }
	I1213 13:16:24.360333  139657 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 13:16:24.360355  139657 crio.go:433] Images already preloaded, skipping extraction
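
	The preload check parses the JSON from sudo crictl images --output json and concludes that every image required for Kubernetes v1.34.2 on cri-o is already present, so no preload tarball needs to be extracted. A compact way to eyeball the same list, assuming jq on the host:

	# Sketch: list only the repo tags from the JSON the preload check parses.
	minikube -p functional-101171 ssh "sudo crictl images --output json" | jq -r '.images[].repoTags[]'
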
	I1213 13:16:24.360418  139657 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:16:24.392193  139657 command_runner.go:130] > {
	I1213 13:16:24.392217  139657 command_runner.go:130] >   "images":  [
	I1213 13:16:24.392221  139657 command_runner.go:130] >     {
	I1213 13:16:24.392229  139657 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1213 13:16:24.392236  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.392246  139657 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 13:16:24.392257  139657 command_runner.go:130] >       ],
	I1213 13:16:24.392268  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.392284  139657 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1213 13:16:24.392297  139657 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1213 13:16:24.392305  139657 command_runner.go:130] >       ],
	I1213 13:16:24.392314  139657 command_runner.go:130] >       "size":  "109379124",
	I1213 13:16:24.392328  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.392335  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.392339  139657 command_runner.go:130] >     },
	I1213 13:16:24.392344  139657 command_runner.go:130] >     {
	I1213 13:16:24.392351  139657 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1213 13:16:24.392357  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.392364  139657 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 13:16:24.392372  139657 command_runner.go:130] >       ],
	I1213 13:16:24.392379  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.392393  139657 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1213 13:16:24.392409  139657 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1213 13:16:24.392417  139657 command_runner.go:130] >       ],
	I1213 13:16:24.392423  139657 command_runner.go:130] >       "size":  "31470524",
	I1213 13:16:24.392430  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.392438  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.392443  139657 command_runner.go:130] >     },
	I1213 13:16:24.392447  139657 command_runner.go:130] >     {
	I1213 13:16:24.392456  139657 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1213 13:16:24.392462  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.392467  139657 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1213 13:16:24.392472  139657 command_runner.go:130] >       ],
	I1213 13:16:24.392478  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.392492  139657 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1213 13:16:24.392507  139657 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1213 13:16:24.392518  139657 command_runner.go:130] >       ],
	I1213 13:16:24.392527  139657 command_runner.go:130] >       "size":  "76103547",
	I1213 13:16:24.392537  139657 command_runner.go:130] >       "username":  "nonroot",
	I1213 13:16:24.392545  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.392548  139657 command_runner.go:130] >     },
	I1213 13:16:24.392551  139657 command_runner.go:130] >     {
	I1213 13:16:24.392557  139657 command_runner.go:130] >       "id":  "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1",
	I1213 13:16:24.392564  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.392579  139657 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 13:16:24.392592  139657 command_runner.go:130] >       ],
	I1213 13:16:24.392603  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.392617  139657 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1213 13:16:24.392633  139657 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"
	I1213 13:16:24.392645  139657 command_runner.go:130] >       ],
	I1213 13:16:24.392654  139657 command_runner.go:130] >       "size":  "63585106",
	I1213 13:16:24.392663  139657 command_runner.go:130] >       "uid":  {
	I1213 13:16:24.392673  139657 command_runner.go:130] >         "value":  "0"
	I1213 13:16:24.392679  139657 command_runner.go:130] >       },
	I1213 13:16:24.392690  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.392698  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.392706  139657 command_runner.go:130] >     },
	I1213 13:16:24.392712  139657 command_runner.go:130] >     {
	I1213 13:16:24.392724  139657 command_runner.go:130] >       "id":  "a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85",
	I1213 13:16:24.392734  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.392746  139657 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.2"
	I1213 13:16:24.392754  139657 command_runner.go:130] >       ],
	I1213 13:16:24.392761  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.392775  139657 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077",
	I1213 13:16:24.392788  139657 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd"
	I1213 13:16:24.392794  139657 command_runner.go:130] >       ],
	I1213 13:16:24.392800  139657 command_runner.go:130] >       "size":  "89046001",
	I1213 13:16:24.392808  139657 command_runner.go:130] >       "uid":  {
	I1213 13:16:24.392818  139657 command_runner.go:130] >         "value":  "0"
	I1213 13:16:24.392826  139657 command_runner.go:130] >       },
	I1213 13:16:24.392833  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.392843  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.392852  139657 command_runner.go:130] >     },
	I1213 13:16:24.392856  139657 command_runner.go:130] >     {
	I1213 13:16:24.392868  139657 command_runner.go:130] >       "id":  "01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8",
	I1213 13:16:24.392876  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.392888  139657 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.2"
	I1213 13:16:24.392895  139657 command_runner.go:130] >       ],
	I1213 13:16:24.392909  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.392924  139657 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb",
	I1213 13:16:24.392940  139657 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e"
	I1213 13:16:24.392949  139657 command_runner.go:130] >       ],
	I1213 13:16:24.392959  139657 command_runner.go:130] >       "size":  "76004183",
	I1213 13:16:24.392967  139657 command_runner.go:130] >       "uid":  {
	I1213 13:16:24.392977  139657 command_runner.go:130] >         "value":  "0"
	I1213 13:16:24.392985  139657 command_runner.go:130] >       },
	I1213 13:16:24.392992  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.393001  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.393007  139657 command_runner.go:130] >     },
	I1213 13:16:24.393011  139657 command_runner.go:130] >     {
	I1213 13:16:24.393021  139657 command_runner.go:130] >       "id":  "8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45",
	I1213 13:16:24.393031  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.393042  139657 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.2"
	I1213 13:16:24.393048  139657 command_runner.go:130] >       ],
	I1213 13:16:24.393058  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.393089  139657 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74",
	I1213 13:16:24.393113  139657 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"
	I1213 13:16:24.393119  139657 command_runner.go:130] >       ],
	I1213 13:16:24.393123  139657 command_runner.go:130] >       "size":  "73145240",
	I1213 13:16:24.393133  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.393140  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.393145  139657 command_runner.go:130] >     },
	I1213 13:16:24.393150  139657 command_runner.go:130] >     {
	I1213 13:16:24.393160  139657 command_runner.go:130] >       "id":  "88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952",
	I1213 13:16:24.393167  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.393174  139657 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.2"
	I1213 13:16:24.393179  139657 command_runner.go:130] >       ],
	I1213 13:16:24.393186  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.393197  139657 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6",
	I1213 13:16:24.393226  139657 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda"
	I1213 13:16:24.393232  139657 command_runner.go:130] >       ],
	I1213 13:16:24.393246  139657 command_runner.go:130] >       "size":  "53848919",
	I1213 13:16:24.393251  139657 command_runner.go:130] >       "uid":  {
	I1213 13:16:24.393257  139657 command_runner.go:130] >         "value":  "0"
	I1213 13:16:24.393262  139657 command_runner.go:130] >       },
	I1213 13:16:24.393267  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.393274  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.393281  139657 command_runner.go:130] >     },
	I1213 13:16:24.393286  139657 command_runner.go:130] >     {
	I1213 13:16:24.393296  139657 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1213 13:16:24.393300  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.393305  139657 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 13:16:24.393311  139657 command_runner.go:130] >       ],
	I1213 13:16:24.393319  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.393333  139657 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1213 13:16:24.393349  139657 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1213 13:16:24.393357  139657 command_runner.go:130] >       ],
	I1213 13:16:24.393367  139657 command_runner.go:130] >       "size":  "742092",
	I1213 13:16:24.393376  139657 command_runner.go:130] >       "uid":  {
	I1213 13:16:24.393383  139657 command_runner.go:130] >         "value":  "65535"
	I1213 13:16:24.393390  139657 command_runner.go:130] >       },
	I1213 13:16:24.393396  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.393405  139657 command_runner.go:130] >       "pinned":  true
	I1213 13:16:24.393408  139657 command_runner.go:130] >     }
	I1213 13:16:24.393416  139657 command_runner.go:130] >   ]
	I1213 13:16:24.393422  139657 command_runner.go:130] > }
	I1213 13:16:24.393572  139657 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 13:16:24.393595  139657 cache_images.go:86] Images are preloaded, skipping loading
	I1213 13:16:24.393606  139657 kubeadm.go:935] updating node { 192.168.39.124 8441 v1.34.2 crio true true} ...
	I1213 13:16:24.393771  139657 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-101171 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.124
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:functional-101171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
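
	kubeadm.go renders the kubelet drop-in above with the node-specific flags (--hostname-override, --node-ip and the kubeconfig paths). systemctl cat prints the unit together with all of its drop-ins, so the flags actually in effect can be confirmed without guessing the drop-in path:

	# Sketch: show the kubelet unit plus drop-ins as systemd sees them on the node.
	minikube -p functional-101171 ssh "sudo systemctl cat kubelet"
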
	I1213 13:16:24.393855  139657 ssh_runner.go:195] Run: crio config
	I1213 13:16:24.427284  139657 command_runner.go:130] ! time="2025-12-13 13:16:24.422256723Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1213 13:16:24.433797  139657 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1213 13:16:24.439545  139657 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1213 13:16:24.439572  139657 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1213 13:16:24.439581  139657 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1213 13:16:24.439585  139657 command_runner.go:130] > #
	I1213 13:16:24.439594  139657 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1213 13:16:24.439602  139657 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1213 13:16:24.439611  139657 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1213 13:16:24.439629  139657 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1213 13:16:24.439638  139657 command_runner.go:130] > # reload'.
	I1213 13:16:24.439648  139657 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1213 13:16:24.439661  139657 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1213 13:16:24.439675  139657 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1213 13:16:24.439687  139657 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1213 13:16:24.439693  139657 command_runner.go:130] > [crio]
	I1213 13:16:24.439704  139657 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1213 13:16:24.439712  139657 command_runner.go:130] > # containers images, in this directory.
	I1213 13:16:24.439720  139657 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1213 13:16:24.439738  139657 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1213 13:16:24.439749  139657 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1213 13:16:24.439761  139657 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1213 13:16:24.439771  139657 command_runner.go:130] > # imagestore = ""
	I1213 13:16:24.439781  139657 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1213 13:16:24.439794  139657 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1213 13:16:24.439803  139657 command_runner.go:130] > # storage_driver = "overlay"
	I1213 13:16:24.439813  139657 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1213 13:16:24.439825  139657 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1213 13:16:24.439832  139657 command_runner.go:130] > storage_option = [
	I1213 13:16:24.439844  139657 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1213 13:16:24.439852  139657 command_runner.go:130] > ]
	I1213 13:16:24.439861  139657 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1213 13:16:24.439872  139657 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1213 13:16:24.439882  139657 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1213 13:16:24.439891  139657 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1213 13:16:24.439911  139657 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1213 13:16:24.439921  139657 command_runner.go:130] > # always happen on a node reboot
	I1213 13:16:24.439930  139657 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1213 13:16:24.439952  139657 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1213 13:16:24.439965  139657 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1213 13:16:24.439979  139657 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1213 13:16:24.439990  139657 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1213 13:16:24.440002  139657 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1213 13:16:24.440018  139657 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1213 13:16:24.440026  139657 command_runner.go:130] > # internal_wipe = true
	I1213 13:16:24.440039  139657 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1213 13:16:24.440051  139657 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1213 13:16:24.440059  139657 command_runner.go:130] > # internal_repair = false
	I1213 13:16:24.440068  139657 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1213 13:16:24.440095  139657 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1213 13:16:24.440115  139657 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1213 13:16:24.440127  139657 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1213 13:16:24.440141  139657 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1213 13:16:24.440150  139657 command_runner.go:130] > [crio.api]
	I1213 13:16:24.440158  139657 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1213 13:16:24.440169  139657 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1213 13:16:24.440178  139657 command_runner.go:130] > # IP address on which the stream server will listen.
	I1213 13:16:24.440188  139657 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1213 13:16:24.440198  139657 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1213 13:16:24.440210  139657 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1213 13:16:24.440217  139657 command_runner.go:130] > # stream_port = "0"
	I1213 13:16:24.440227  139657 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1213 13:16:24.440235  139657 command_runner.go:130] > # stream_enable_tls = false
	I1213 13:16:24.440245  139657 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1213 13:16:24.440256  139657 command_runner.go:130] > # stream_idle_timeout = ""
	I1213 13:16:24.440267  139657 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1213 13:16:24.440289  139657 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1213 13:16:24.440298  139657 command_runner.go:130] > # minutes.
	I1213 13:16:24.440313  139657 command_runner.go:130] > # stream_tls_cert = ""
	I1213 13:16:24.440341  139657 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1213 13:16:24.440355  139657 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1213 13:16:24.440363  139657 command_runner.go:130] > # stream_tls_key = ""
	I1213 13:16:24.440375  139657 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1213 13:16:24.440386  139657 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1213 13:16:24.440416  139657 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1213 13:16:24.440425  139657 command_runner.go:130] > # stream_tls_ca = ""
	I1213 13:16:24.440437  139657 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1213 13:16:24.440447  139657 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1213 13:16:24.440460  139657 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1213 13:16:24.440470  139657 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
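
The two gRPC size keys above are the only uncommented values in the [crio.api] section of this dump. As a minimal sketch (not part of this test run), the same override could be carried in a drop-in file, since CRI-O merges files from /etc/crio/crio.conf.d/ over the main config; the file name is illustrative and the values simply mirror the log:

	# /etc/crio/crio.conf.d/10-grpc-limits.conf  (illustrative file name)
	[crio.api]
	# Raise both gRPC message limits to 16 MiB, matching the values logged above.
	grpc_max_send_msg_size = 16777216
	grpc_max_recv_msg_size = 16777216
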
	I1213 13:16:24.440480  139657 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1213 13:16:24.440492  139657 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1213 13:16:24.440498  139657 command_runner.go:130] > [crio.runtime]
	I1213 13:16:24.440510  139657 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1213 13:16:24.440519  139657 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1213 13:16:24.440528  139657 command_runner.go:130] > # "nofile=1024:2048"
	I1213 13:16:24.440538  139657 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1213 13:16:24.440547  139657 command_runner.go:130] > # default_ulimits = [
	I1213 13:16:24.440553  139657 command_runner.go:130] > # ]
	I1213 13:16:24.440565  139657 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1213 13:16:24.440572  139657 command_runner.go:130] > # no_pivot = false
	I1213 13:16:24.440582  139657 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1213 13:16:24.440592  139657 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1213 13:16:24.440603  139657 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1213 13:16:24.440612  139657 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1213 13:16:24.440623  139657 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1213 13:16:24.440635  139657 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1213 13:16:24.440644  139657 command_runner.go:130] > conmon = "/usr/bin/conmon"
	I1213 13:16:24.440652  139657 command_runner.go:130] > # Cgroup setting for conmon
	I1213 13:16:24.440664  139657 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1213 13:16:24.440672  139657 command_runner.go:130] > conmon_cgroup = "pod"
	I1213 13:16:24.440690  139657 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1213 13:16:24.440701  139657 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1213 13:16:24.440713  139657 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1213 13:16:24.440726  139657 command_runner.go:130] > conmon_env = [
	I1213 13:16:24.440736  139657 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1213 13:16:24.440743  139657 command_runner.go:130] > ]
	I1213 13:16:24.440753  139657 command_runner.go:130] > # Additional environment variables to set for all the
	I1213 13:16:24.440764  139657 command_runner.go:130] > # containers. These are overridden if set in the
	I1213 13:16:24.440774  139657 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1213 13:16:24.440783  139657 command_runner.go:130] > # default_env = [
	I1213 13:16:24.440788  139657 command_runner.go:130] > # ]
	I1213 13:16:24.440801  139657 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1213 13:16:24.440813  139657 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1213 13:16:24.440822  139657 command_runner.go:130] > # selinux = false
	I1213 13:16:24.440831  139657 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1213 13:16:24.440844  139657 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1213 13:16:24.440853  139657 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1213 13:16:24.440860  139657 command_runner.go:130] > # seccomp_profile = ""
	I1213 13:16:24.440868  139657 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1213 13:16:24.440877  139657 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1213 13:16:24.440888  139657 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1213 13:16:24.440896  139657 command_runner.go:130] > # which might increase security.
	I1213 13:16:24.440904  139657 command_runner.go:130] > # This option is currently deprecated,
	I1213 13:16:24.440914  139657 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1213 13:16:24.440925  139657 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1213 13:16:24.440935  139657 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1213 13:16:24.440949  139657 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1213 13:16:24.440961  139657 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1213 13:16:24.440972  139657 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1213 13:16:24.440982  139657 command_runner.go:130] > # This option supports live configuration reload.
	I1213 13:16:24.440989  139657 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1213 13:16:24.441001  139657 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1213 13:16:24.441008  139657 command_runner.go:130] > # the cgroup blockio controller.
	I1213 13:16:24.441025  139657 command_runner.go:130] > # blockio_config_file = ""
	I1213 13:16:24.441040  139657 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1213 13:16:24.441047  139657 command_runner.go:130] > # blockio parameters.
	I1213 13:16:24.441054  139657 command_runner.go:130] > # blockio_reload = false
	I1213 13:16:24.441065  139657 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1213 13:16:24.441088  139657 command_runner.go:130] > # irqbalance daemon.
	I1213 13:16:24.441100  139657 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1213 13:16:24.441116  139657 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1213 13:16:24.441138  139657 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1213 13:16:24.441152  139657 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1213 13:16:24.441171  139657 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1213 13:16:24.441183  139657 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1213 13:16:24.441194  139657 command_runner.go:130] > # This option supports live configuration reload.
	I1213 13:16:24.441201  139657 command_runner.go:130] > # rdt_config_file = ""
	I1213 13:16:24.441210  139657 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1213 13:16:24.441217  139657 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1213 13:16:24.441272  139657 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1213 13:16:24.441283  139657 command_runner.go:130] > # separate_pull_cgroup = ""
	I1213 13:16:24.441291  139657 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1213 13:16:24.441300  139657 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1213 13:16:24.441306  139657 command_runner.go:130] > # will be added.
	I1213 13:16:24.441314  139657 command_runner.go:130] > # default_capabilities = [
	I1213 13:16:24.441320  139657 command_runner.go:130] > # 	"CHOWN",
	I1213 13:16:24.441328  139657 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1213 13:16:24.441334  139657 command_runner.go:130] > # 	"FSETID",
	I1213 13:16:24.441341  139657 command_runner.go:130] > # 	"FOWNER",
	I1213 13:16:24.441347  139657 command_runner.go:130] > # 	"SETGID",
	I1213 13:16:24.441355  139657 command_runner.go:130] > # 	"SETUID",
	I1213 13:16:24.441361  139657 command_runner.go:130] > # 	"SETPCAP",
	I1213 13:16:24.441368  139657 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1213 13:16:24.441375  139657 command_runner.go:130] > # 	"KILL",
	I1213 13:16:24.441381  139657 command_runner.go:130] > # ]
	I1213 13:16:24.441394  139657 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1213 13:16:24.441414  139657 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1213 13:16:24.441425  139657 command_runner.go:130] > # add_inheritable_capabilities = false
	I1213 13:16:24.441436  139657 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1213 13:16:24.441449  139657 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1213 13:16:24.441457  139657 command_runner.go:130] > default_sysctls = [
	I1213 13:16:24.441465  139657 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1213 13:16:24.441471  139657 command_runner.go:130] > ]
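
The default_sysctls block above is the only active sysctl override in this [crio.runtime] section. A hedged sketch of widening it would add further "name=value" entries to the same array; the extra sysctl below is illustrative only and is not set in the logged configuration:

	[crio.runtime]
	default_sysctls = [
		"net.ipv4.ip_unprivileged_port_start=0",
		# Illustrative extra namespaced sysctl; not present in the logged config.
		"net.ipv4.ping_group_range=0 2147483647",
	]
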
	I1213 13:16:24.441479  139657 command_runner.go:130] > # List of devices on the host that a
	I1213 13:16:24.441492  139657 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1213 13:16:24.441499  139657 command_runner.go:130] > # allowed_devices = [
	I1213 13:16:24.441514  139657 command_runner.go:130] > # 	"/dev/fuse",
	I1213 13:16:24.441521  139657 command_runner.go:130] > # ]
	I1213 13:16:24.441529  139657 command_runner.go:130] > # List of additional devices, specified as
	I1213 13:16:24.441544  139657 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1213 13:16:24.441554  139657 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1213 13:16:24.441563  139657 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1213 13:16:24.441577  139657 command_runner.go:130] > # additional_devices = [
	I1213 13:16:24.441583  139657 command_runner.go:130] > # ]
	I1213 13:16:24.441592  139657 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1213 13:16:24.441599  139657 command_runner.go:130] > # cdi_spec_dirs = [
	I1213 13:16:24.441606  139657 command_runner.go:130] > # 	"/etc/cdi",
	I1213 13:16:24.441615  139657 command_runner.go:130] > # 	"/var/run/cdi",
	I1213 13:16:24.441620  139657 command_runner.go:130] > # ]
	I1213 13:16:24.441631  139657 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1213 13:16:24.441644  139657 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1213 13:16:24.441653  139657 command_runner.go:130] > # Defaults to false.
	I1213 13:16:24.441661  139657 command_runner.go:130] > # device_ownership_from_security_context = false
	I1213 13:16:24.441674  139657 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1213 13:16:24.441685  139657 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1213 13:16:24.441694  139657 command_runner.go:130] > # hooks_dir = [
	I1213 13:16:24.441700  139657 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1213 13:16:24.441707  139657 command_runner.go:130] > # ]
	I1213 13:16:24.441719  139657 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1213 13:16:24.441739  139657 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1213 13:16:24.441751  139657 command_runner.go:130] > # its default mounts from the following two files:
	I1213 13:16:24.441757  139657 command_runner.go:130] > #
	I1213 13:16:24.441770  139657 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1213 13:16:24.441780  139657 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1213 13:16:24.441791  139657 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1213 13:16:24.441797  139657 command_runner.go:130] > #
	I1213 13:16:24.441809  139657 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1213 13:16:24.441819  139657 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1213 13:16:24.441832  139657 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1213 13:16:24.441841  139657 command_runner.go:130] > #      only add mounts it finds in this file.
	I1213 13:16:24.441849  139657 command_runner.go:130] > #
	I1213 13:16:24.441856  139657 command_runner.go:130] > # default_mounts_file = ""
	I1213 13:16:24.441866  139657 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1213 13:16:24.441877  139657 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1213 13:16:24.441886  139657 command_runner.go:130] > pids_limit = 1024
	I1213 13:16:24.441896  139657 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1213 13:16:24.441906  139657 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1213 13:16:24.441917  139657 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1213 13:16:24.441931  139657 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1213 13:16:24.441941  139657 command_runner.go:130] > # log_size_max = -1
	I1213 13:16:24.441953  139657 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1213 13:16:24.441963  139657 command_runner.go:130] > # log_to_journald = false
	I1213 13:16:24.441977  139657 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1213 13:16:24.441987  139657 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1213 13:16:24.441995  139657 command_runner.go:130] > # Path to directory for container attach sockets.
	I1213 13:16:24.442006  139657 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1213 13:16:24.442015  139657 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1213 13:16:24.442024  139657 command_runner.go:130] > # bind_mount_prefix = ""
	I1213 13:16:24.442034  139657 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1213 13:16:24.442042  139657 command_runner.go:130] > # read_only = false
	I1213 13:16:24.442052  139657 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1213 13:16:24.442065  139657 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1213 13:16:24.442093  139657 command_runner.go:130] > # live configuration reload.
	I1213 13:16:24.442101  139657 command_runner.go:130] > # log_level = "info"
	I1213 13:16:24.442120  139657 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1213 13:16:24.442131  139657 command_runner.go:130] > # This option supports live configuration reload.
	I1213 13:16:24.442139  139657 command_runner.go:130] > # log_filter = ""
	I1213 13:16:24.442149  139657 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1213 13:16:24.442163  139657 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1213 13:16:24.442172  139657 command_runner.go:130] > # separated by comma.
	I1213 13:16:24.442185  139657 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 13:16:24.442194  139657 command_runner.go:130] > # uid_mappings = ""
	I1213 13:16:24.442205  139657 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1213 13:16:24.442218  139657 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1213 13:16:24.442227  139657 command_runner.go:130] > # separated by comma.
	I1213 13:16:24.442244  139657 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 13:16:24.442254  139657 command_runner.go:130] > # gid_mappings = ""
	I1213 13:16:24.442264  139657 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1213 13:16:24.442277  139657 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1213 13:16:24.442289  139657 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1213 13:16:24.442302  139657 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 13:16:24.442310  139657 command_runner.go:130] > # minimum_mappable_uid = -1
	I1213 13:16:24.442320  139657 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1213 13:16:24.442333  139657 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1213 13:16:24.442344  139657 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1213 13:16:24.442357  139657 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 13:16:24.442364  139657 command_runner.go:130] > # minimum_mappable_gid = -1
	I1213 13:16:24.442373  139657 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1213 13:16:24.442391  139657 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1213 13:16:24.442402  139657 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1213 13:16:24.442409  139657 command_runner.go:130] > # ctr_stop_timeout = 30
	I1213 13:16:24.442419  139657 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1213 13:16:24.442430  139657 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1213 13:16:24.442441  139657 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1213 13:16:24.442450  139657 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1213 13:16:24.442467  139657 command_runner.go:130] > drop_infra_ctr = false
	I1213 13:16:24.442479  139657 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1213 13:16:24.442489  139657 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1213 13:16:24.442503  139657 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1213 13:16:24.442510  139657 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1213 13:16:24.442523  139657 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1213 13:16:24.442534  139657 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1213 13:16:24.442546  139657 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1213 13:16:24.442554  139657 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1213 13:16:24.442563  139657 command_runner.go:130] > # shared_cpuset = ""
	I1213 13:16:24.442572  139657 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1213 13:16:24.442581  139657 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1213 13:16:24.442589  139657 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1213 13:16:24.442601  139657 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1213 13:16:24.442608  139657 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1213 13:16:24.442618  139657 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1213 13:16:24.442631  139657 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1213 13:16:24.442640  139657 command_runner.go:130] > # enable_criu_support = false
	I1213 13:16:24.442650  139657 command_runner.go:130] > # Enable/disable the generation of the container,
	I1213 13:16:24.442660  139657 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1213 13:16:24.442667  139657 command_runner.go:130] > # enable_pod_events = false
	I1213 13:16:24.442677  139657 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1213 13:16:24.442699  139657 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1213 13:16:24.442706  139657 command_runner.go:130] > # default_runtime = "runc"
	I1213 13:16:24.442715  139657 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1213 13:16:24.442726  139657 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1213 13:16:24.442741  139657 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1213 13:16:24.442756  139657 command_runner.go:130] > # creation as a file is not desired either.
	I1213 13:16:24.442774  139657 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1213 13:16:24.442784  139657 command_runner.go:130] > # the hostname is being managed dynamically.
	I1213 13:16:24.442792  139657 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1213 13:16:24.442797  139657 command_runner.go:130] > # ]
	I1213 13:16:24.442815  139657 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1213 13:16:24.442828  139657 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1213 13:16:24.442840  139657 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1213 13:16:24.442851  139657 command_runner.go:130] > # Each entry in the table should follow the format:
	I1213 13:16:24.442856  139657 command_runner.go:130] > #
	I1213 13:16:24.442865  139657 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1213 13:16:24.442873  139657 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1213 13:16:24.442881  139657 command_runner.go:130] > # runtime_type = "oci"
	I1213 13:16:24.442949  139657 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1213 13:16:24.442960  139657 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1213 13:16:24.442967  139657 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1213 13:16:24.442973  139657 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1213 13:16:24.442978  139657 command_runner.go:130] > # monitor_env = []
	I1213 13:16:24.442986  139657 command_runner.go:130] > # privileged_without_host_devices = false
	I1213 13:16:24.442993  139657 command_runner.go:130] > # allowed_annotations = []
	I1213 13:16:24.443003  139657 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1213 13:16:24.443012  139657 command_runner.go:130] > # Where:
	I1213 13:16:24.443020  139657 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1213 13:16:24.443031  139657 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1213 13:16:24.443049  139657 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1213 13:16:24.443061  139657 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1213 13:16:24.443080  139657 command_runner.go:130] > #   in $PATH.
	I1213 13:16:24.443104  139657 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1213 13:16:24.443121  139657 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1213 13:16:24.443132  139657 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1213 13:16:24.443140  139657 command_runner.go:130] > #   state.
	I1213 13:16:24.443151  139657 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1213 13:16:24.443162  139657 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1213 13:16:24.443173  139657 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1213 13:16:24.443185  139657 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1213 13:16:24.443195  139657 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1213 13:16:24.443209  139657 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1213 13:16:24.443220  139657 command_runner.go:130] > #   The currently recognized values are:
	I1213 13:16:24.443242  139657 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1213 13:16:24.443258  139657 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1213 13:16:24.443270  139657 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1213 13:16:24.443280  139657 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1213 13:16:24.443293  139657 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1213 13:16:24.443305  139657 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1213 13:16:24.443319  139657 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1213 13:16:24.443332  139657 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1213 13:16:24.443342  139657 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1213 13:16:24.443354  139657 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1213 13:16:24.443362  139657 command_runner.go:130] > #   deprecated option "conmon".
	I1213 13:16:24.443374  139657 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1213 13:16:24.443385  139657 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1213 13:16:24.443397  139657 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1213 13:16:24.443407  139657 command_runner.go:130] > #   should be moved to the container's cgroup
	I1213 13:16:24.443418  139657 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1213 13:16:24.443429  139657 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1213 13:16:24.443440  139657 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1213 13:16:24.443452  139657 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1213 13:16:24.443457  139657 command_runner.go:130] > #
	I1213 13:16:24.443467  139657 command_runner.go:130] > # Using the seccomp notifier feature:
	I1213 13:16:24.443473  139657 command_runner.go:130] > #
	I1213 13:16:24.443482  139657 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1213 13:16:24.443496  139657 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1213 13:16:24.443504  139657 command_runner.go:130] > #
	I1213 13:16:24.443514  139657 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1213 13:16:24.443525  139657 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1213 13:16:24.443533  139657 command_runner.go:130] > #
	I1213 13:16:24.443544  139657 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1213 13:16:24.443550  139657 command_runner.go:130] > # feature.
	I1213 13:16:24.443555  139657 command_runner.go:130] > #
	I1213 13:16:24.443567  139657 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1213 13:16:24.443577  139657 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1213 13:16:24.443598  139657 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1213 13:16:24.443613  139657 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1213 13:16:24.443628  139657 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1213 13:16:24.443636  139657 command_runner.go:130] > #
	I1213 13:16:24.443646  139657 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1213 13:16:24.443659  139657 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1213 13:16:24.443667  139657 command_runner.go:130] > #
	I1213 13:16:24.443676  139657 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1213 13:16:24.443688  139657 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1213 13:16:24.443694  139657 command_runner.go:130] > #
	I1213 13:16:24.443705  139657 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1213 13:16:24.443718  139657 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1213 13:16:24.443725  139657 command_runner.go:130] > # limitation.
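
Tying the notifier description above to the runtime table that follows: a handler permitted to process that annotation would declare it in allowed_annotations. This is a sketch only; the runc handler logged below does not set allowed_annotations in this run:

	[crio.runtime.runtimes.runc]
	runtime_path = "/usr/bin/runc"
	# Permit the seccomp notifier annotation described above for this handler.
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]
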
	I1213 13:16:24.443734  139657 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1213 13:16:24.443740  139657 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1213 13:16:24.443747  139657 command_runner.go:130] > runtime_type = "oci"
	I1213 13:16:24.443755  139657 command_runner.go:130] > runtime_root = "/run/runc"
	I1213 13:16:24.443766  139657 command_runner.go:130] > runtime_config_path = ""
	I1213 13:16:24.443773  139657 command_runner.go:130] > monitor_path = "/usr/bin/conmon"
	I1213 13:16:24.443779  139657 command_runner.go:130] > monitor_cgroup = "pod"
	I1213 13:16:24.443786  139657 command_runner.go:130] > monitor_exec_cgroup = ""
	I1213 13:16:24.443792  139657 command_runner.go:130] > monitor_env = [
	I1213 13:16:24.443802  139657 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1213 13:16:24.443810  139657 command_runner.go:130] > ]
	I1213 13:16:24.443818  139657 command_runner.go:130] > privileged_without_host_devices = false
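
The runc entry above is the only handler defined in this dump. Following the field descriptions earlier in the config, a second OCI handler would be another table of the same shape; the crun name and paths below are assumptions for illustration, not part of this test run:

	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"   # assumed install location
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/bin/conmon"
	monitor_cgroup = "pod"
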
	I1213 13:16:24.443830  139657 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1213 13:16:24.443839  139657 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1213 13:16:24.443849  139657 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1213 13:16:24.443863  139657 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1213 13:16:24.443876  139657 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1213 13:16:24.443887  139657 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1213 13:16:24.443903  139657 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1213 13:16:24.443918  139657 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1213 13:16:24.443936  139657 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1213 13:16:24.443950  139657 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1213 13:16:24.443956  139657 command_runner.go:130] > # Example:
	I1213 13:16:24.443964  139657 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1213 13:16:24.443971  139657 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1213 13:16:24.443984  139657 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1213 13:16:24.443994  139657 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1213 13:16:24.444004  139657 command_runner.go:130] > # cpuset = 0
	I1213 13:16:24.444013  139657 command_runner.go:130] > # cpushares = "0-1"
	I1213 13:16:24.444019  139657 command_runner.go:130] > # Where:
	I1213 13:16:24.444027  139657 command_runner.go:130] > # The workload name is workload-type.
	I1213 13:16:24.444038  139657 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1213 13:16:24.444050  139657 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1213 13:16:24.444060  139657 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1213 13:16:24.444086  139657 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1213 13:16:24.444097  139657 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1213 13:16:24.444112  139657 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1213 13:16:24.444127  139657 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1213 13:16:24.444136  139657 command_runner.go:130] > # Default value is set to true
	I1213 13:16:24.444143  139657 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1213 13:16:24.444152  139657 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1213 13:16:24.444162  139657 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1213 13:16:24.444170  139657 command_runner.go:130] > # Default value is set to 'false'
	I1213 13:16:24.444179  139657 command_runner.go:130] > # disable_hostport_mapping = false
	I1213 13:16:24.444194  139657 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1213 13:16:24.444202  139657 command_runner.go:130] > #
	I1213 13:16:24.444212  139657 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1213 13:16:24.444227  139657 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1213 13:16:24.444240  139657 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1213 13:16:24.444250  139657 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1213 13:16:24.444260  139657 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1213 13:16:24.444277  139657 command_runner.go:130] > [crio.image]
	I1213 13:16:24.444290  139657 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1213 13:16:24.444308  139657 command_runner.go:130] > # default_transport = "docker://"
	I1213 13:16:24.444322  139657 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1213 13:16:24.444336  139657 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1213 13:16:24.444346  139657 command_runner.go:130] > # global_auth_file = ""
	I1213 13:16:24.444357  139657 command_runner.go:130] > # The image used to instantiate infra containers.
	I1213 13:16:24.444366  139657 command_runner.go:130] > # This option supports live configuration reload.
	I1213 13:16:24.444377  139657 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10.1"
	I1213 13:16:24.444388  139657 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1213 13:16:24.444401  139657 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1213 13:16:24.444411  139657 command_runner.go:130] > # This option supports live configuration reload.
	I1213 13:16:24.444418  139657 command_runner.go:130] > # pause_image_auth_file = ""
	I1213 13:16:24.444432  139657 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1213 13:16:24.444443  139657 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1213 13:16:24.444456  139657 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1213 13:16:24.444465  139657 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1213 13:16:24.444475  139657 command_runner.go:130] > # pause_command = "/pause"
	I1213 13:16:24.444485  139657 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1213 13:16:24.444498  139657 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1213 13:16:24.444510  139657 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1213 13:16:24.444522  139657 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1213 13:16:24.444533  139657 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1213 13:16:24.444547  139657 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1213 13:16:24.444555  139657 command_runner.go:130] > # pinned_images = [
	I1213 13:16:24.444560  139657 command_runner.go:130] > # ]
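
pinned_images is commented out here, so only the configured pause image is treated as pinned by default. A sketch of pinning additional images, using the exact and glob patterns described above (the glob entry is illustrative):

	[crio.image]
	pinned_images = [
		"registry.k8s.io/pause:3.10.1",
		# Glob match: protects every image whose name starts with this prefix.
		"registry.k8s.io/kube-*",
	]
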
	I1213 13:16:24.444570  139657 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1213 13:16:24.444583  139657 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1213 13:16:24.444593  139657 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1213 13:16:24.444612  139657 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1213 13:16:24.444624  139657 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1213 13:16:24.444632  139657 command_runner.go:130] > # signature_policy = ""
	I1213 13:16:24.444644  139657 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1213 13:16:24.444655  139657 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1213 13:16:24.444668  139657 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1213 13:16:24.444686  139657 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1213 13:16:24.444698  139657 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1213 13:16:24.444707  139657 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1213 13:16:24.444717  139657 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1213 13:16:24.444730  139657 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1213 13:16:24.444737  139657 command_runner.go:130] > # changing them here.
	I1213 13:16:24.444744  139657 command_runner.go:130] > # insecure_registries = [
	I1213 13:16:24.444749  139657 command_runner.go:130] > # ]
	I1213 13:16:24.444762  139657 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1213 13:16:24.444771  139657 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1213 13:16:24.444780  139657 command_runner.go:130] > # image_volumes = "mkdir"
	I1213 13:16:24.444788  139657 command_runner.go:130] > # Temporary directory to use for storing big files
	I1213 13:16:24.444796  139657 command_runner.go:130] > # big_files_temporary_dir = ""
	I1213 13:16:24.444807  139657 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1213 13:16:24.444818  139657 command_runner.go:130] > # CNI plugins.
	I1213 13:16:24.444827  139657 command_runner.go:130] > [crio.network]
	I1213 13:16:24.444837  139657 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1213 13:16:24.444847  139657 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1213 13:16:24.444854  139657 command_runner.go:130] > # cni_default_network = ""
	I1213 13:16:24.444863  139657 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1213 13:16:24.444871  139657 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1213 13:16:24.444880  139657 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1213 13:16:24.444887  139657 command_runner.go:130] > # plugin_dirs = [
	I1213 13:16:24.444894  139657 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1213 13:16:24.444898  139657 command_runner.go:130] > # ]
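
All [crio.network] keys are commented out in this dump, so CRI-O picks the first network found in network_dir; minikube's bridge recommendation appears a few lines further down. A sketch of pinning the selection explicitly (the network name is an assumption):

	[crio.network]
	# Select a specific CNI network instead of the first file found in network_dir.
	cni_default_network = "bridge"
	network_dir = "/etc/cni/net.d/"
	plugin_dirs = [
		"/opt/cni/bin/",
	]
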
	I1213 13:16:24.444913  139657 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1213 13:16:24.444923  139657 command_runner.go:130] > [crio.metrics]
	I1213 13:16:24.444931  139657 command_runner.go:130] > # Globally enable or disable metrics support.
	I1213 13:16:24.444941  139657 command_runner.go:130] > enable_metrics = true
	I1213 13:16:24.444949  139657 command_runner.go:130] > # Specify enabled metrics collectors.
	I1213 13:16:24.444959  139657 command_runner.go:130] > # Per default all metrics are enabled.
	I1213 13:16:24.444971  139657 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1213 13:16:24.444984  139657 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1213 13:16:24.445004  139657 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1213 13:16:24.445013  139657 command_runner.go:130] > # metrics_collectors = [
	I1213 13:16:24.445020  139657 command_runner.go:130] > # 	"operations",
	I1213 13:16:24.445031  139657 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1213 13:16:24.445038  139657 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1213 13:16:24.445045  139657 command_runner.go:130] > # 	"operations_errors",
	I1213 13:16:24.445052  139657 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1213 13:16:24.445060  139657 command_runner.go:130] > # 	"image_pulls_by_name",
	I1213 13:16:24.445068  139657 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1213 13:16:24.445085  139657 command_runner.go:130] > # 	"image_pulls_failures",
	I1213 13:16:24.445092  139657 command_runner.go:130] > # 	"image_pulls_successes",
	I1213 13:16:24.445099  139657 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1213 13:16:24.445110  139657 command_runner.go:130] > # 	"image_layer_reuse",
	I1213 13:16:24.445121  139657 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1213 13:16:24.445128  139657 command_runner.go:130] > # 	"containers_oom_total",
	I1213 13:16:24.445134  139657 command_runner.go:130] > # 	"containers_oom",
	I1213 13:16:24.445141  139657 command_runner.go:130] > # 	"processes_defunct",
	I1213 13:16:24.445147  139657 command_runner.go:130] > # 	"operations_total",
	I1213 13:16:24.445155  139657 command_runner.go:130] > # 	"operations_latency_seconds",
	I1213 13:16:24.445163  139657 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1213 13:16:24.445170  139657 command_runner.go:130] > # 	"operations_errors_total",
	I1213 13:16:24.445178  139657 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1213 13:16:24.445186  139657 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1213 13:16:24.445194  139657 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1213 13:16:24.445202  139657 command_runner.go:130] > # 	"image_pulls_success_total",
	I1213 13:16:24.445210  139657 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1213 13:16:24.445218  139657 command_runner.go:130] > # 	"containers_oom_count_total",
	I1213 13:16:24.445231  139657 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1213 13:16:24.445238  139657 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1213 13:16:24.445244  139657 command_runner.go:130] > # ]
	I1213 13:16:24.445253  139657 command_runner.go:130] > # The port on which the metrics server will listen.
	I1213 13:16:24.445259  139657 command_runner.go:130] > # metrics_port = 9090
	I1213 13:16:24.445268  139657 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1213 13:16:24.445284  139657 command_runner.go:130] > # metrics_socket = ""
	I1213 13:16:24.445295  139657 command_runner.go:130] > # The certificate for the secure metrics server.
	I1213 13:16:24.445306  139657 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1213 13:16:24.445319  139657 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1213 13:16:24.445328  139657 command_runner.go:130] > # certificate on any modification event.
	I1213 13:16:24.445335  139657 command_runner.go:130] > # metrics_cert = ""
	I1213 13:16:24.445344  139657 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1213 13:16:24.445355  139657 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1213 13:16:24.445360  139657 command_runner.go:130] > # metrics_key = ""
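
enable_metrics is the only active key in [crio.metrics] here, so every collector is exported on the default port. A trimmed sketch that keeps just two collectors, with names taken from the list enumerated above:

	[crio.metrics]
	enable_metrics = true
	metrics_port = 9090
	# Restrict export to two of the collectors enumerated in the dump above.
	metrics_collectors = [
		"operations",
		"image_pulls_failure_total",
	]
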
	I1213 13:16:24.445370  139657 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1213 13:16:24.445379  139657 command_runner.go:130] > [crio.tracing]
	I1213 13:16:24.445387  139657 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1213 13:16:24.445394  139657 command_runner.go:130] > # enable_tracing = false
	I1213 13:16:24.445403  139657 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1213 13:16:24.445413  139657 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1213 13:16:24.445424  139657 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1213 13:16:24.445435  139657 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1213 13:16:24.445444  139657 command_runner.go:130] > # CRI-O NRI configuration.
	I1213 13:16:24.445450  139657 command_runner.go:130] > [crio.nri]
	I1213 13:16:24.445457  139657 command_runner.go:130] > # Globally enable or disable NRI.
	I1213 13:16:24.445465  139657 command_runner.go:130] > # enable_nri = false
	I1213 13:16:24.445471  139657 command_runner.go:130] > # NRI socket to listen on.
	I1213 13:16:24.445479  139657 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1213 13:16:24.445490  139657 command_runner.go:130] > # NRI plugin directory to use.
	I1213 13:16:24.445498  139657 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1213 13:16:24.445509  139657 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1213 13:16:24.445518  139657 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1213 13:16:24.445528  139657 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1213 13:16:24.445539  139657 command_runner.go:130] > # nri_disable_connections = false
	I1213 13:16:24.445548  139657 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1213 13:16:24.445556  139657 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1213 13:16:24.445564  139657 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1213 13:16:24.445572  139657 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1213 13:16:24.445606  139657 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1213 13:16:24.445616  139657 command_runner.go:130] > [crio.stats]
	I1213 13:16:24.445625  139657 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1213 13:16:24.445640  139657 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1213 13:16:24.445648  139657 command_runner.go:130] > # stats_collection_period = 0
	I1213 13:16:24.445769  139657 cni.go:84] Creating CNI manager for ""
	I1213 13:16:24.445787  139657 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 13:16:24.445812  139657 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 13:16:24.445847  139657 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.124 APIServerPort:8441 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-101171 NodeName:functional-101171 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.124"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.124 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 13:16:24.446054  139657 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.124
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-101171"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.124"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.124"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 13:16:24.446191  139657 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 13:16:24.458394  139657 command_runner.go:130] > kubeadm
	I1213 13:16:24.458424  139657 command_runner.go:130] > kubectl
	I1213 13:16:24.458446  139657 command_runner.go:130] > kubelet
	I1213 13:16:24.458789  139657 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 13:16:24.458853  139657 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 13:16:24.471347  139657 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1213 13:16:24.493805  139657 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 13:16:24.515984  139657 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
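The kubeadm config printed above is what gets scp'd to /var/tmp/minikube/kubeadm.yaml.new: a single multi-document YAML bundling InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. A small sketch of splitting such a bundle and listing the document kinds, standard library only (the local file name is illustrative):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		// Illustrative local copy of the bundle; on the node it lives at
		// /var/tmp/minikube/kubeadm.yaml.new per the scp above.
		data, err := os.ReadFile("kubeadm.yaml")
		if err != nil {
			panic(err)
		}
		// YAML documents are separated by a line containing only "---".
		for i, doc := range strings.Split(string(data), "\n---\n") {
			for _, line := range strings.Split(doc, "\n") {
				if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
					fmt.Printf("document %d: %s\n", i+1, strings.TrimSpace(line))
					break
				}
			}
		}
	}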
	I1213 13:16:24.538444  139657 ssh_runner.go:195] Run: grep 192.168.39.124	control-plane.minikube.internal$ /etc/hosts
	I1213 13:16:24.543369  139657 command_runner.go:130] > 192.168.39.124	control-plane.minikube.internal
	I1213 13:16:24.543465  139657 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:16:24.727714  139657 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 13:16:24.748340  139657 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-101171 for IP: 192.168.39.124
	I1213 13:16:24.748371  139657 certs.go:195] generating shared ca certs ...
	I1213 13:16:24.748391  139657 certs.go:227] acquiring lock for ca certs: {Name:mk4d1e73c1a19abecca2e995e14d97b9ab149024 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:16:24.748616  139657 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-131207/.minikube/ca.key
	I1213 13:16:24.748684  139657 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-131207/.minikube/proxy-client-ca.key
	I1213 13:16:24.748697  139657 certs.go:257] generating profile certs ...
	I1213 13:16:24.748799  139657 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-101171/client.key
	I1213 13:16:24.748886  139657 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-101171/apiserver.key.194f038f
	I1213 13:16:24.748927  139657 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-101171/proxy-client.key
	I1213 13:16:24.748940  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 13:16:24.748961  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 13:16:24.748976  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 13:16:24.748999  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 13:16:24.749016  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-101171/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 13:16:24.749031  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-101171/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 13:16:24.749046  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-101171/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 13:16:24.749066  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-101171/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 13:16:24.749158  139657 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/135234.pem (1338 bytes)
	W1213 13:16:24.749196  139657 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-131207/.minikube/certs/135234_empty.pem, impossibly tiny 0 bytes
	I1213 13:16:24.749208  139657 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 13:16:24.749236  139657 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem (1078 bytes)
	I1213 13:16:24.749267  139657 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/cert.pem (1123 bytes)
	I1213 13:16:24.749300  139657 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/key.pem (1675 bytes)
	I1213 13:16:24.749360  139657 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/ssl/certs/1352342.pem (1708 bytes)
	I1213 13:16:24.749402  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:16:24.749419  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/135234.pem -> /usr/share/ca-certificates/135234.pem
	I1213 13:16:24.749434  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/ssl/certs/1352342.pem -> /usr/share/ca-certificates/1352342.pem
	I1213 13:16:24.750215  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 13:16:24.784325  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 13:16:24.817785  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 13:16:24.853144  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 13:16:24.890536  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-101171/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 13:16:24.926567  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-101171/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 13:16:24.962010  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-101171/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 13:16:24.998369  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-101171/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 13:16:25.032230  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 13:16:25.068964  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/certs/135234.pem --> /usr/share/ca-certificates/135234.pem (1338 bytes)
	I1213 13:16:25.102766  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/ssl/certs/1352342.pem --> /usr/share/ca-certificates/1352342.pem (1708 bytes)
	I1213 13:16:25.136252  139657 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 13:16:25.160868  139657 ssh_runner.go:195] Run: openssl version
	I1213 13:16:25.169220  139657 command_runner.go:130] > OpenSSL 3.4.1 11 Feb 2025 (Library: OpenSSL 3.4.1 11 Feb 2025)
	I1213 13:16:25.169344  139657 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1352342.pem
	I1213 13:16:25.182662  139657 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1352342.pem /etc/ssl/certs/1352342.pem
	I1213 13:16:25.196346  139657 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1352342.pem
	I1213 13:16:25.202552  139657 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 13 13:13 /usr/share/ca-certificates/1352342.pem
	I1213 13:16:25.202645  139657 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 13:13 /usr/share/ca-certificates/1352342.pem
	I1213 13:16:25.202700  139657 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1352342.pem
	I1213 13:16:25.211067  139657 command_runner.go:130] > 3ec20f2e
	I1213 13:16:25.211253  139657 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 13:16:25.224328  139657 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:16:25.238368  139657 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 13:16:25.252003  139657 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:16:25.258273  139657 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 13 13:06 /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:16:25.258311  139657 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 13:06 /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:16:25.258360  139657 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:16:25.266989  139657 command_runner.go:130] > b5213941
	I1213 13:16:25.267145  139657 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 13:16:25.280410  139657 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/135234.pem
	I1213 13:16:25.293801  139657 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/135234.pem /etc/ssl/certs/135234.pem
	I1213 13:16:25.308024  139657 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/135234.pem
	I1213 13:16:25.313993  139657 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 13 13:13 /usr/share/ca-certificates/135234.pem
	I1213 13:16:25.314032  139657 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 13:13 /usr/share/ca-certificates/135234.pem
	I1213 13:16:25.314112  139657 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/135234.pem
	I1213 13:16:25.322512  139657 command_runner.go:130] > 51391683
	I1213 13:16:25.322716  139657 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
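Each of the three rounds above follows the same pattern for installing a CA into the system trust store: copy the PEM under /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink /etc/ssl/certs/<hash>.0 to it. A sketch of that step in Go, shelling out to openssl just as minikube does over SSH (the certificate path is taken from the log; writing into /etc/ssl/certs needs root):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		cert := "/usr/share/ca-certificates/minikubeCA.pem"
		// Same command the log records: openssl x509 -hash -noout -in <cert>
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		// Equivalent of "ln -fs": drop any stale link, then recreate it.
		_ = os.Remove(link)
		if err := os.Symlink(cert, link); err != nil {
			panic(err)
		}
		fmt.Println(link, "->", cert)
	}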
	I1213 13:16:25.335714  139657 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 13:16:25.341584  139657 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 13:16:25.341629  139657 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1213 13:16:25.341635  139657 command_runner.go:130] > Device: 253,1	Inode: 7338073     Links: 1
	I1213 13:16:25.341641  139657 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 13:16:25.341647  139657 command_runner.go:130] > Access: 2025-12-13 13:13:54.213466193 +0000
	I1213 13:16:25.341652  139657 command_runner.go:130] > Modify: 2025-12-13 13:13:54.213466193 +0000
	I1213 13:16:25.341657  139657 command_runner.go:130] > Change: 2025-12-13 13:13:54.213466193 +0000
	I1213 13:16:25.341662  139657 command_runner.go:130] >  Birth: 2025-12-13 13:13:54.213466193 +0000
	I1213 13:16:25.341740  139657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 13:16:25.350002  139657 command_runner.go:130] > Certificate will not expire
	I1213 13:16:25.350186  139657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 13:16:25.358329  139657 command_runner.go:130] > Certificate will not expire
	I1213 13:16:25.358448  139657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 13:16:25.366344  139657 command_runner.go:130] > Certificate will not expire
	I1213 13:16:25.366481  139657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 13:16:25.374941  139657 command_runner.go:130] > Certificate will not expire
	I1213 13:16:25.375017  139657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 13:16:25.383466  139657 command_runner.go:130] > Certificate will not expire
	I1213 13:16:25.383560  139657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 13:16:25.391728  139657 command_runner.go:130] > Certificate will not expire
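The six probes above all run openssl x509 -checkend 86400, i.e. "is this certificate still valid 24 hours from now?". The same check expressed with Go's crypto/x509 (a sketch; the path is one of the certs the log inspects):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Equivalent of -checkend 86400: compare NotAfter against now + 24h.
		if time.Now().Add(24 * time.Hour).Before(cert.NotAfter) {
			fmt.Println("Certificate will not expire")
		} else {
			fmt.Println("Certificate will expire")
		}
	}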
	I1213 13:16:25.391825  139657 kubeadm.go:401] StartCluster: {Name:functional-101171 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-101171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.124 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:16:25.391949  139657 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 13:16:25.392028  139657 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 13:16:25.432281  139657 command_runner.go:130] > f92a3a092485a0ac1dc51a2bc6f50ba873a8493104faa8027f92b47afffd326c
	I1213 13:16:25.432316  139657 command_runner.go:130] > 0f7e3e7bcf1b4fc58d523ea0a6b71f4b7f6159f908472192dec50c5c4773a6c8
	I1213 13:16:25.432327  139657 command_runner.go:130] > 82d65eddb23627c8c7b03d97ee25384a4641be44a8ce176195431ba631e420a4
	I1213 13:16:25.432337  139657 command_runner.go:130] > c035d6ae568bcf65e2e0e0ac9c8e33c9683cfa5e9962808be5bc1d7e90560b68
	I1213 13:16:25.432345  139657 command_runner.go:130] > f2e4d14cfaaeb50496758e8c7af82df0842b56679f7302760beb406f1d2377b0
	I1213 13:16:25.432364  139657 command_runner.go:130] > 5c5106dd6f44ef172d73e559df759af56aae17be00dbd7bda168113b5c87103e
	I1213 13:16:25.432372  139657 command_runner.go:130] > 9098da3bf6a16aa5aca362d77b4eefdf3d8740ee47058bac1f57462956a0ec41
	I1213 13:16:25.432382  139657 command_runner.go:130] > 032a755151e3edddee963cde3642ebab28ccd3cad4f977f5abe9be2793036fd5
	I1213 13:16:25.432392  139657 command_runner.go:130] > f8b0288ee3d2f686e17cab2f0126717e4773c0a011bf820a99b08c7146415889
	I1213 13:16:25.432405  139657 command_runner.go:130] > cb7606d3b6d8f2b73f95595faf6894b2622d71cebaf6f7aa31ae8cac07f16b57
	I1213 13:16:25.432417  139657 command_runner.go:130] > f02d47f5908b9925ba08e11c9c86ffc993d978b0210bc885a88444e31b6a2a63
	I1213 13:16:25.432448  139657 cri.go:89] found id: "f92a3a092485a0ac1dc51a2bc6f50ba873a8493104faa8027f92b47afffd326c"
	I1213 13:16:25.432463  139657 cri.go:89] found id: "0f7e3e7bcf1b4fc58d523ea0a6b71f4b7f6159f908472192dec50c5c4773a6c8"
	I1213 13:16:25.432471  139657 cri.go:89] found id: "82d65eddb23627c8c7b03d97ee25384a4641be44a8ce176195431ba631e420a4"
	I1213 13:16:25.432481  139657 cri.go:89] found id: "c035d6ae568bcf65e2e0e0ac9c8e33c9683cfa5e9962808be5bc1d7e90560b68"
	I1213 13:16:25.432487  139657 cri.go:89] found id: "f2e4d14cfaaeb50496758e8c7af82df0842b56679f7302760beb406f1d2377b0"
	I1213 13:16:25.432495  139657 cri.go:89] found id: "5c5106dd6f44ef172d73e559df759af56aae17be00dbd7bda168113b5c87103e"
	I1213 13:16:25.432501  139657 cri.go:89] found id: "9098da3bf6a16aa5aca362d77b4eefdf3d8740ee47058bac1f57462956a0ec41"
	I1213 13:16:25.432510  139657 cri.go:89] found id: "032a755151e3edddee963cde3642ebab28ccd3cad4f977f5abe9be2793036fd5"
	I1213 13:16:25.432516  139657 cri.go:89] found id: "f8b0288ee3d2f686e17cab2f0126717e4773c0a011bf820a99b08c7146415889"
	I1213 13:16:25.432528  139657 cri.go:89] found id: "cb7606d3b6d8f2b73f95595faf6894b2622d71cebaf6f7aa31ae8cac07f16b57"
	I1213 13:16:25.432537  139657 cri.go:89] found id: "f02d47f5908b9925ba08e11c9c86ffc993d978b0210bc885a88444e31b6a2a63"
	I1213 13:16:25.432544  139657 cri.go:89] found id: ""
	I1213 13:16:25.432611  139657 ssh_runner.go:195] Run: sudo runc list -f json
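StartCluster begins by enumerating any existing kube-system containers through crictl, filtering on the io.kubernetes.pod.namespace label, and then cross-checks the container list against runc. The same listing can be reproduced on the node with a short Go wrapper around the exact command shown in the log (run as root, or via sudo as here):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Mirrors the ssh_runner invocation above:
		// sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			panic(err)
		}
		ids := strings.Fields(string(out))
		fmt.Printf("found %d kube-system containers\n", len(ids))
		for _, id := range ids {
			fmt.Println(id)
		}
	}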

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-101171 -n functional-101171
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-101171 -n functional-101171: exit status 2 (213.53802ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-101171" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (1473.47s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (636.72s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-101171 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-101171 get po -A: exit status 1 (61.745048ms)

                                                
                                                
** stderr ** 
	E1213 13:39:18.395960  145124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.124:8441/api?timeout=32s\": dial tcp 192.168.39.124:8441: connect: connection refused"
	E1213 13:39:18.396863  145124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.124:8441/api?timeout=32s\": dial tcp 192.168.39.124:8441: connect: connection refused"
	E1213 13:39:18.398905  145124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.124:8441/api?timeout=32s\": dial tcp 192.168.39.124:8441: connect: connection refused"
	E1213 13:39:18.399577  145124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.124:8441/api?timeout=32s\": dial tcp 192.168.39.124:8441: connect: connection refused"
	E1213 13:39:18.401383  145124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.124:8441/api?timeout=32s\": dial tcp 192.168.39.124:8441: connect: connection refused"
	The connection to the server 192.168.39.124:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-101171 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"E1213 13:39:18.395960  145124 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.39.124:8441/api?timeout=32s\\\": dial tcp 192.168.39.124:8441: connect: connection refused\"\nE1213 13:39:18.396863  145124 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.39.124:8441/api?timeout=32s\\\": dial tcp 192.168.39.124:8441: connect: connection refused\"\nE1213 13:39:18.398905  145124 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.39.124:8441/api?timeout=32s\\\": dial tcp 192.168.39.124:8441: connect: connection refused\"\nE1213 13:39:18.399577  145124 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.39.124:8441/api?timeout=32s\\\": dial tcp 192.168.39.124:8441: connect: connection refused\"\nE1213 13:39:18.401383  145124 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.39.124:8441/api?timeout=32s\\\": dial tcp 192.168.39.124:8441: connect: connection refused\"\nThe connection to the server 192.168.39.124:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-101171 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-101171 get po -A"
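A "connection refused" from kubectl, as opposed to an i/o timeout, means the VM answered the TCP handshake with a reset: the host is reachable but nothing is listening on 8441, so the apiserver process itself is down rather than the network path being broken. That distinction can be reproduced outside kubectl with a plain TCP dial (a sketch; the address is the one from the failing commands above):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "192.168.39.124:8441", 3*time.Second)
		if err != nil {
			// "connection refused" => host up, no listener (apiserver not running).
			// "i/o timeout"        => host or port unreachable (network problem).
			fmt.Println("dial failed:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port accepts TCP connections")
	}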
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-101171 -n functional-101171
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-101171 -n functional-101171: exit status 2 (208.676335ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctional/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-101171 logs -n 25
E1213 13:43:12.163310  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:46:15.233407  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:48:12.163473  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-101171 logs -n 25: (10m36.199490024s)
helpers_test.go:261: TestFunctional/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                         ARGS                                                          │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ addons-685870 addons disable cloud-spanner --alsologtostderr -v=1                                                     │ addons-685870     │ jenkins │ v1.37.0 │ 13 Dec 25 13:09 UTC │ 13 Dec 25 13:09 UTC │
	│ ip      │ addons-685870 ip                                                                                                      │ addons-685870     │ jenkins │ v1.37.0 │ 13 Dec 25 13:11 UTC │ 13 Dec 25 13:11 UTC │
	│ addons  │ addons-685870 addons disable ingress-dns --alsologtostderr -v=1                                                       │ addons-685870     │ jenkins │ v1.37.0 │ 13 Dec 25 13:11 UTC │ 13 Dec 25 13:11 UTC │
	│ addons  │ addons-685870 addons disable ingress --alsologtostderr -v=1                                                           │ addons-685870     │ jenkins │ v1.37.0 │ 13 Dec 25 13:11 UTC │ 13 Dec 25 13:11 UTC │
	│ stop    │ -p addons-685870                                                                                                      │ addons-685870     │ jenkins │ v1.37.0 │ 13 Dec 25 13:11 UTC │ 13 Dec 25 13:12 UTC │
	│ addons  │ enable dashboard -p addons-685870                                                                                     │ addons-685870     │ jenkins │ v1.37.0 │ 13 Dec 25 13:12 UTC │ 13 Dec 25 13:12 UTC │
	│ addons  │ disable dashboard -p addons-685870                                                                                    │ addons-685870     │ jenkins │ v1.37.0 │ 13 Dec 25 13:12 UTC │ 13 Dec 25 13:12 UTC │
	│ addons  │ disable gvisor -p addons-685870                                                                                       │ addons-685870     │ jenkins │ v1.37.0 │ 13 Dec 25 13:12 UTC │ 13 Dec 25 13:12 UTC │
	│ delete  │ -p addons-685870                                                                                                      │ addons-685870     │ jenkins │ v1.37.0 │ 13 Dec 25 13:12 UTC │ 13 Dec 25 13:12 UTC │
	│ start   │ -p nospam-339903 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-339903 --driver=kvm2  --container-runtime=crio │ nospam-339903     │ jenkins │ v1.37.0 │ 13 Dec 25 13:12 UTC │ 13 Dec 25 13:13 UTC │
	│ start   │ nospam-339903 --log_dir /tmp/nospam-339903 start --dry-run                                                            │ nospam-339903     │ jenkins │ v1.37.0 │ 13 Dec 25 13:13 UTC │                     │
	│ start   │ nospam-339903 --log_dir /tmp/nospam-339903 start --dry-run                                                            │ nospam-339903     │ jenkins │ v1.37.0 │ 13 Dec 25 13:13 UTC │                     │
	│ start   │ nospam-339903 --log_dir /tmp/nospam-339903 start --dry-run                                                            │ nospam-339903     │ jenkins │ v1.37.0 │ 13 Dec 25 13:13 UTC │                     │
	│ pause   │ nospam-339903 --log_dir /tmp/nospam-339903 pause                                                                      │ nospam-339903     │ jenkins │ v1.37.0 │ 13 Dec 25 13:13 UTC │ 13 Dec 25 13:13 UTC │
	│ pause   │ nospam-339903 --log_dir /tmp/nospam-339903 pause                                                                      │ nospam-339903     │ jenkins │ v1.37.0 │ 13 Dec 25 13:13 UTC │ 13 Dec 25 13:13 UTC │
	│ pause   │ nospam-339903 --log_dir /tmp/nospam-339903 pause                                                                      │ nospam-339903     │ jenkins │ v1.37.0 │ 13 Dec 25 13:13 UTC │ 13 Dec 25 13:13 UTC │
	│ unpause │ nospam-339903 --log_dir /tmp/nospam-339903 unpause                                                                    │ nospam-339903     │ jenkins │ v1.37.0 │ 13 Dec 25 13:13 UTC │ 13 Dec 25 13:13 UTC │
	│ unpause │ nospam-339903 --log_dir /tmp/nospam-339903 unpause                                                                    │ nospam-339903     │ jenkins │ v1.37.0 │ 13 Dec 25 13:13 UTC │ 13 Dec 25 13:13 UTC │
	│ unpause │ nospam-339903 --log_dir /tmp/nospam-339903 unpause                                                                    │ nospam-339903     │ jenkins │ v1.37.0 │ 13 Dec 25 13:13 UTC │ 13 Dec 25 13:13 UTC │
	│ stop    │ nospam-339903 --log_dir /tmp/nospam-339903 stop                                                                       │ nospam-339903     │ jenkins │ v1.37.0 │ 13 Dec 25 13:13 UTC │ 13 Dec 25 13:13 UTC │
	│ stop    │ nospam-339903 --log_dir /tmp/nospam-339903 stop                                                                       │ nospam-339903     │ jenkins │ v1.37.0 │ 13 Dec 25 13:13 UTC │ 13 Dec 25 13:13 UTC │
	│ stop    │ nospam-339903 --log_dir /tmp/nospam-339903 stop                                                                       │ nospam-339903     │ jenkins │ v1.37.0 │ 13 Dec 25 13:13 UTC │ 13 Dec 25 13:13 UTC │
	│ delete  │ -p nospam-339903                                                                                                      │ nospam-339903     │ jenkins │ v1.37.0 │ 13 Dec 25 13:13 UTC │ 13 Dec 25 13:13 UTC │
	│ start   │ -p functional-101171 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio           │ functional-101171 │ jenkins │ v1.37.0 │ 13 Dec 25 13:13 UTC │ 13 Dec 25 13:14 UTC │
	│ start   │ -p functional-101171 --alsologtostderr -v=8                                                                           │ functional-101171 │ jenkins │ v1.37.0 │ 13 Dec 25 13:14 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 13:14:44
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 13:14:44.880702  139657 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:14:44.880839  139657 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:14:44.880850  139657 out.go:374] Setting ErrFile to fd 2...
	I1213 13:14:44.880858  139657 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:14:44.881087  139657 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-131207/.minikube/bin
	I1213 13:14:44.881551  139657 out.go:368] Setting JSON to false
	I1213 13:14:44.882447  139657 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3425,"bootTime":1765628260,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:14:44.882501  139657 start.go:143] virtualization: kvm guest
	I1213 13:14:44.884268  139657 out.go:179] * [functional-101171] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 13:14:44.885270  139657 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 13:14:44.885307  139657 notify.go:221] Checking for updates...
	I1213 13:14:44.887088  139657 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:14:44.888140  139657 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-131207/kubeconfig
	I1213 13:14:44.889099  139657 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-131207/.minikube
	I1213 13:14:44.890102  139657 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 13:14:44.891038  139657 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 13:14:44.892542  139657 config.go:182] Loaded profile config "functional-101171": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:14:44.892673  139657 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 13:14:44.927435  139657 out.go:179] * Using the kvm2 driver based on existing profile
	I1213 13:14:44.928372  139657 start.go:309] selected driver: kvm2
	I1213 13:14:44.928386  139657 start.go:927] validating driver "kvm2" against &{Name:functional-101171 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-101171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.124 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:14:44.928499  139657 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 13:14:44.929402  139657 cni.go:84] Creating CNI manager for ""
	I1213 13:14:44.929464  139657 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 13:14:44.929513  139657 start.go:353] cluster config:
	{Name:functional-101171 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-101171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.124 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:14:44.929611  139657 iso.go:125] acquiring lock: {Name:mk3b22d147b17c1b05cdcd03e16c3f962e91cdaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 13:14:44.930834  139657 out.go:179] * Starting "functional-101171" primary control-plane node in "functional-101171" cluster
	I1213 13:14:44.931691  139657 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 13:14:44.931725  139657 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-131207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1213 13:14:44.931737  139657 cache.go:65] Caching tarball of preloaded images
	I1213 13:14:44.931865  139657 preload.go:238] Found /home/jenkins/minikube-integration/22122-131207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 13:14:44.931879  139657 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 13:14:44.931980  139657 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-101171/config.json ...
	I1213 13:14:44.932230  139657 start.go:360] acquireMachinesLock for functional-101171: {Name:mkd3517afd6ad3d581ae9f96a02a4688cf83ce0e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 13:14:44.932293  139657 start.go:364] duration metric: took 38.36µs to acquireMachinesLock for "functional-101171"
	I1213 13:14:44.932313  139657 start.go:96] Skipping create...Using existing machine configuration
	I1213 13:14:44.932324  139657 fix.go:54] fixHost starting: 
	I1213 13:14:44.933932  139657 fix.go:112] recreateIfNeeded on functional-101171: state=Running err=<nil>
	W1213 13:14:44.933963  139657 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 13:14:44.935205  139657 out.go:252] * Updating the running kvm2 "functional-101171" VM ...
	I1213 13:14:44.935228  139657 machine.go:94] provisionDockerMachine start ...
	I1213 13:14:44.937452  139657 main.go:143] libmachine: domain functional-101171 has defined MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:44.937806  139657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7c:80:d0", ip: ""} in network mk-functional-101171: {Iface:virbr1 ExpiryTime:2025-12-13 14:13:45 +0000 UTC Type:0 Mac:52:54:00:7c:80:d0 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-101171 Clientid:01:52:54:00:7c:80:d0}
	I1213 13:14:44.937835  139657 main.go:143] libmachine: domain functional-101171 has defined IP address 192.168.39.124 and MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:44.938001  139657 main.go:143] libmachine: Using SSH client type: native
	I1213 13:14:44.938338  139657 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.124 22 <nil> <nil>}
	I1213 13:14:44.938355  139657 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 13:14:45.046797  139657 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-101171
	
	I1213 13:14:45.046826  139657 buildroot.go:166] provisioning hostname "functional-101171"
	I1213 13:14:45.049877  139657 main.go:143] libmachine: domain functional-101171 has defined MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:45.050321  139657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7c:80:d0", ip: ""} in network mk-functional-101171: {Iface:virbr1 ExpiryTime:2025-12-13 14:13:45 +0000 UTC Type:0 Mac:52:54:00:7c:80:d0 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-101171 Clientid:01:52:54:00:7c:80:d0}
	I1213 13:14:45.050355  139657 main.go:143] libmachine: domain functional-101171 has defined IP address 192.168.39.124 and MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:45.050541  139657 main.go:143] libmachine: Using SSH client type: native
	I1213 13:14:45.050782  139657 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.124 22 <nil> <nil>}
	I1213 13:14:45.050798  139657 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-101171 && echo "functional-101171" | sudo tee /etc/hostname
	I1213 13:14:45.172748  139657 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-101171
	
	I1213 13:14:45.175509  139657 main.go:143] libmachine: domain functional-101171 has defined MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:45.175971  139657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7c:80:d0", ip: ""} in network mk-functional-101171: {Iface:virbr1 ExpiryTime:2025-12-13 14:13:45 +0000 UTC Type:0 Mac:52:54:00:7c:80:d0 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-101171 Clientid:01:52:54:00:7c:80:d0}
	I1213 13:14:45.176008  139657 main.go:143] libmachine: domain functional-101171 has defined IP address 192.168.39.124 and MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:45.176182  139657 main.go:143] libmachine: Using SSH client type: native
	I1213 13:14:45.176385  139657 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.124 22 <nil> <nil>}
	I1213 13:14:45.176400  139657 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-101171' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-101171/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-101171' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 13:14:45.281039  139657 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 13:14:45.281099  139657 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22122-131207/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-131207/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-131207/.minikube}
	I1213 13:14:45.281128  139657 buildroot.go:174] setting up certificates
	I1213 13:14:45.281147  139657 provision.go:84] configureAuth start
	I1213 13:14:45.283949  139657 main.go:143] libmachine: domain functional-101171 has defined MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:45.284380  139657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7c:80:d0", ip: ""} in network mk-functional-101171: {Iface:virbr1 ExpiryTime:2025-12-13 14:13:45 +0000 UTC Type:0 Mac:52:54:00:7c:80:d0 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-101171 Clientid:01:52:54:00:7c:80:d0}
	I1213 13:14:45.284418  139657 main.go:143] libmachine: domain functional-101171 has defined IP address 192.168.39.124 and MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:45.286705  139657 main.go:143] libmachine: domain functional-101171 has defined MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:45.287058  139657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7c:80:d0", ip: ""} in network mk-functional-101171: {Iface:virbr1 ExpiryTime:2025-12-13 14:13:45 +0000 UTC Type:0 Mac:52:54:00:7c:80:d0 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-101171 Clientid:01:52:54:00:7c:80:d0}
	I1213 13:14:45.287116  139657 main.go:143] libmachine: domain functional-101171 has defined IP address 192.168.39.124 and MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:45.287256  139657 provision.go:143] copyHostCerts
	I1213 13:14:45.287299  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22122-131207/.minikube/ca.pem
	I1213 13:14:45.287346  139657 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-131207/.minikube/ca.pem, removing ...
	I1213 13:14:45.287365  139657 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-131207/.minikube/ca.pem
	I1213 13:14:45.287454  139657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-131207/.minikube/ca.pem (1078 bytes)
	I1213 13:14:45.287580  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22122-131207/.minikube/cert.pem
	I1213 13:14:45.287614  139657 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-131207/.minikube/cert.pem, removing ...
	I1213 13:14:45.287625  139657 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-131207/.minikube/cert.pem
	I1213 13:14:45.287672  139657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-131207/.minikube/cert.pem (1123 bytes)
	I1213 13:14:45.287766  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22122-131207/.minikube/key.pem
	I1213 13:14:45.287791  139657 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-131207/.minikube/key.pem, removing ...
	I1213 13:14:45.287797  139657 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-131207/.minikube/key.pem
	I1213 13:14:45.287842  139657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-131207/.minikube/key.pem (1675 bytes)
	I1213 13:14:45.287926  139657 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-131207/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca-key.pem org=jenkins.functional-101171 san=[127.0.0.1 192.168.39.124 functional-101171 localhost minikube]
	I1213 13:14:45.423318  139657 provision.go:177] copyRemoteCerts
	I1213 13:14:45.423403  139657 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 13:14:45.425911  139657 main.go:143] libmachine: domain functional-101171 has defined MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:45.426340  139657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7c:80:d0", ip: ""} in network mk-functional-101171: {Iface:virbr1 ExpiryTime:2025-12-13 14:13:45 +0000 UTC Type:0 Mac:52:54:00:7c:80:d0 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-101171 Clientid:01:52:54:00:7c:80:d0}
	I1213 13:14:45.426370  139657 main.go:143] libmachine: domain functional-101171 has defined IP address 192.168.39.124 and MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:45.426502  139657 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/functional-101171/id_rsa Username:docker}
	I1213 13:14:45.512848  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 13:14:45.512952  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 13:14:45.542724  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 13:14:45.542812  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 13:14:45.571677  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 13:14:45.571772  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 13:14:45.601284  139657 provision.go:87] duration metric: took 320.120369ms to configureAuth
	I1213 13:14:45.601314  139657 buildroot.go:189] setting minikube options for container-runtime
	I1213 13:14:45.601491  139657 config.go:182] Loaded profile config "functional-101171": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:14:45.604379  139657 main.go:143] libmachine: domain functional-101171 has defined MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:45.604741  139657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7c:80:d0", ip: ""} in network mk-functional-101171: {Iface:virbr1 ExpiryTime:2025-12-13 14:13:45 +0000 UTC Type:0 Mac:52:54:00:7c:80:d0 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-101171 Clientid:01:52:54:00:7c:80:d0}
	I1213 13:14:45.604764  139657 main.go:143] libmachine: domain functional-101171 has defined IP address 192.168.39.124 and MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:45.604932  139657 main.go:143] libmachine: Using SSH client type: native
	I1213 13:14:45.605181  139657 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.124 22 <nil> <nil>}
	I1213 13:14:45.605200  139657 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 13:14:51.168422  139657 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 13:14:51.168457  139657 machine.go:97] duration metric: took 6.233220346s to provisionDockerMachine
	I1213 13:14:51.168486  139657 start.go:293] postStartSetup for "functional-101171" (driver="kvm2")
	I1213 13:14:51.168502  139657 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 13:14:51.168611  139657 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 13:14:51.171649  139657 main.go:143] libmachine: domain functional-101171 has defined MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:51.172012  139657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7c:80:d0", ip: ""} in network mk-functional-101171: {Iface:virbr1 ExpiryTime:2025-12-13 14:13:45 +0000 UTC Type:0 Mac:52:54:00:7c:80:d0 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-101171 Clientid:01:52:54:00:7c:80:d0}
	I1213 13:14:51.172099  139657 main.go:143] libmachine: domain functional-101171 has defined IP address 192.168.39.124 and MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:51.172264  139657 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/functional-101171/id_rsa Username:docker}
	I1213 13:14:51.256552  139657 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 13:14:51.261415  139657 command_runner.go:130] > NAME=Buildroot
	I1213 13:14:51.261442  139657 command_runner.go:130] > VERSION=2025.02-dirty
	I1213 13:14:51.261446  139657 command_runner.go:130] > ID=buildroot
	I1213 13:14:51.261450  139657 command_runner.go:130] > VERSION_ID=2025.02
	I1213 13:14:51.261455  139657 command_runner.go:130] > PRETTY_NAME="Buildroot 2025.02"
	I1213 13:14:51.261540  139657 info.go:137] Remote host: Buildroot 2025.02
	I1213 13:14:51.261567  139657 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-131207/.minikube/addons for local assets ...
	I1213 13:14:51.261651  139657 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-131207/.minikube/files for local assets ...
	I1213 13:14:51.261758  139657 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/ssl/certs/1352342.pem -> 1352342.pem in /etc/ssl/certs
	I1213 13:14:51.261772  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/ssl/certs/1352342.pem -> /etc/ssl/certs/1352342.pem
	I1213 13:14:51.261876  139657 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/test/nested/copy/135234/hosts -> hosts in /etc/test/nested/copy/135234
	I1213 13:14:51.261886  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/test/nested/copy/135234/hosts -> /etc/test/nested/copy/135234/hosts
	I1213 13:14:51.261944  139657 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/135234
	I1213 13:14:51.275404  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/ssl/certs/1352342.pem --> /etc/ssl/certs/1352342.pem (1708 bytes)
	I1213 13:14:51.304392  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/test/nested/copy/135234/hosts --> /etc/test/nested/copy/135234/hosts (40 bytes)
	I1213 13:14:51.390782  139657 start.go:296] duration metric: took 222.277729ms for postStartSetup
	I1213 13:14:51.390831  139657 fix.go:56] duration metric: took 6.458506569s for fixHost
	I1213 13:14:51.394087  139657 main.go:143] libmachine: domain functional-101171 has defined MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:51.394507  139657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7c:80:d0", ip: ""} in network mk-functional-101171: {Iface:virbr1 ExpiryTime:2025-12-13 14:13:45 +0000 UTC Type:0 Mac:52:54:00:7c:80:d0 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-101171 Clientid:01:52:54:00:7c:80:d0}
	I1213 13:14:51.394539  139657 main.go:143] libmachine: domain functional-101171 has defined IP address 192.168.39.124 and MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:51.394733  139657 main.go:143] libmachine: Using SSH client type: native
	I1213 13:14:51.395032  139657 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.124 22 <nil> <nil>}
	I1213 13:14:51.395048  139657 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1213 13:14:51.547616  139657 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765631691.540521728
	
	I1213 13:14:51.547640  139657 fix.go:216] guest clock: 1765631691.540521728
	I1213 13:14:51.547663  139657 fix.go:229] Guest: 2025-12-13 13:14:51.540521728 +0000 UTC Remote: 2025-12-13 13:14:51.390838299 +0000 UTC m=+6.561594252 (delta=149.683429ms)
	I1213 13:14:51.547685  139657 fix.go:200] guest clock delta is within tolerance: 149.683429ms
	I1213 13:14:51.547691  139657 start.go:83] releasing machines lock for "functional-101171", held for 6.615387027s
	I1213 13:14:51.550620  139657 main.go:143] libmachine: domain functional-101171 has defined MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:51.551093  139657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7c:80:d0", ip: ""} in network mk-functional-101171: {Iface:virbr1 ExpiryTime:2025-12-13 14:13:45 +0000 UTC Type:0 Mac:52:54:00:7c:80:d0 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-101171 Clientid:01:52:54:00:7c:80:d0}
	I1213 13:14:51.551134  139657 main.go:143] libmachine: domain functional-101171 has defined IP address 192.168.39.124 and MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:51.551858  139657 ssh_runner.go:195] Run: cat /version.json
	I1213 13:14:51.551895  139657 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 13:14:51.555225  139657 main.go:143] libmachine: domain functional-101171 has defined MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:51.555396  139657 main.go:143] libmachine: domain functional-101171 has defined MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:51.555679  139657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7c:80:d0", ip: ""} in network mk-functional-101171: {Iface:virbr1 ExpiryTime:2025-12-13 14:13:45 +0000 UTC Type:0 Mac:52:54:00:7c:80:d0 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-101171 Clientid:01:52:54:00:7c:80:d0}
	I1213 13:14:51.555709  139657 main.go:143] libmachine: domain functional-101171 has defined IP address 192.168.39.124 and MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:51.555901  139657 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/functional-101171/id_rsa Username:docker}
	I1213 13:14:51.555915  139657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7c:80:d0", ip: ""} in network mk-functional-101171: {Iface:virbr1 ExpiryTime:2025-12-13 14:13:45 +0000 UTC Type:0 Mac:52:54:00:7c:80:d0 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-101171 Clientid:01:52:54:00:7c:80:d0}
	I1213 13:14:51.555948  139657 main.go:143] libmachine: domain functional-101171 has defined IP address 192.168.39.124 and MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:51.556188  139657 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/functional-101171/id_rsa Username:docker}
	I1213 13:14:51.711392  139657 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1213 13:14:51.711480  139657 command_runner.go:130] > {"iso_version": "v1.37.0-1765613186-22122", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "89f69959280ebeefd164cfeba1f5b84c6f004bc9"}
	I1213 13:14:51.711625  139657 ssh_runner.go:195] Run: systemctl --version
	I1213 13:14:51.721211  139657 command_runner.go:130] > systemd 256 (256.7)
	I1213 13:14:51.721261  139657 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP -LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT -LIBARCHIVE
	I1213 13:14:51.721342  139657 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 13:14:51.928878  139657 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 13:14:51.943312  139657 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1213 13:14:51.943381  139657 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 13:14:51.943457  139657 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 13:14:51.961133  139657 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 13:14:51.961160  139657 start.go:496] detecting cgroup driver to use...
	I1213 13:14:51.961234  139657 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 13:14:52.008684  139657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 13:14:52.058685  139657 docker.go:218] disabling cri-docker service (if available) ...
	I1213 13:14:52.058767  139657 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 13:14:52.099652  139657 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 13:14:52.129214  139657 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 13:14:52.454020  139657 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 13:14:52.731152  139657 docker.go:234] disabling docker service ...
	I1213 13:14:52.731233  139657 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 13:14:52.789926  139657 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 13:14:52.807635  139657 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 13:14:53.089730  139657 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 13:14:53.328299  139657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 13:14:53.351747  139657 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 13:14:53.384802  139657 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1213 13:14:53.384876  139657 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 13:14:53.385004  139657 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:14:53.402675  139657 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 13:14:53.402773  139657 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:14:53.425941  139657 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:14:53.444350  139657 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:14:53.459025  139657 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 13:14:53.488518  139657 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:14:53.515384  139657 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:14:53.531334  139657 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:14:53.545103  139657 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 13:14:53.555838  139657 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1213 13:14:53.556273  139657 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 13:14:53.567831  139657 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:14:53.751704  139657 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 13:16:24.195369  139657 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.443610327s)
	I1213 13:16:24.195422  139657 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 13:16:24.195496  139657 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 13:16:24.201208  139657 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1213 13:16:24.201250  139657 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1213 13:16:24.201260  139657 command_runner.go:130] > Device: 0,23	Inode: 1994        Links: 1
	I1213 13:16:24.201270  139657 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 13:16:24.201277  139657 command_runner.go:130] > Access: 2025-12-13 13:16:24.024303909 +0000
	I1213 13:16:24.201287  139657 command_runner.go:130] > Modify: 2025-12-13 13:16:24.024303909 +0000
	I1213 13:16:24.201293  139657 command_runner.go:130] > Change: 2025-12-13 13:16:24.024303909 +0000
	I1213 13:16:24.201298  139657 command_runner.go:130] >  Birth: 2025-12-13 13:16:24.024303909 +0000
	I1213 13:16:24.201336  139657 start.go:564] Will wait 60s for crictl version
	I1213 13:16:24.201389  139657 ssh_runner.go:195] Run: which crictl
	I1213 13:16:24.205825  139657 command_runner.go:130] > /usr/bin/crictl
	I1213 13:16:24.205969  139657 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 13:16:24.240544  139657 command_runner.go:130] > Version:  0.1.0
	I1213 13:16:24.240566  139657 command_runner.go:130] > RuntimeName:  cri-o
	I1213 13:16:24.240571  139657 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1213 13:16:24.240576  139657 command_runner.go:130] > RuntimeApiVersion:  v1
	I1213 13:16:24.240600  139657 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1213 13:16:24.240739  139657 ssh_runner.go:195] Run: crio --version
	I1213 13:16:24.274046  139657 command_runner.go:130] > crio version 1.29.1
	I1213 13:16:24.274084  139657 command_runner.go:130] > Version:        1.29.1
	I1213 13:16:24.274090  139657 command_runner.go:130] > GitCommit:      unknown
	I1213 13:16:24.274094  139657 command_runner.go:130] > GitCommitDate:  unknown
	I1213 13:16:24.274098  139657 command_runner.go:130] > GitTreeState:   clean
	I1213 13:16:24.274104  139657 command_runner.go:130] > BuildDate:      2025-12-13T11:21:09Z
	I1213 13:16:24.274108  139657 command_runner.go:130] > GoVersion:      go1.25.5
	I1213 13:16:24.274112  139657 command_runner.go:130] > Compiler:       gc
	I1213 13:16:24.274115  139657 command_runner.go:130] > Platform:       linux/amd64
	I1213 13:16:24.274119  139657 command_runner.go:130] > Linkmode:       dynamic
	I1213 13:16:24.274126  139657 command_runner.go:130] > BuildTags:      
	I1213 13:16:24.274131  139657 command_runner.go:130] >   containers_image_ostree_stub
	I1213 13:16:24.274135  139657 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1213 13:16:24.274138  139657 command_runner.go:130] >   btrfs_noversion
	I1213 13:16:24.274143  139657 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1213 13:16:24.274150  139657 command_runner.go:130] >   libdm_no_deferred_remove
	I1213 13:16:24.274153  139657 command_runner.go:130] >   seccomp
	I1213 13:16:24.274158  139657 command_runner.go:130] > LDFlags:          unknown
	I1213 13:16:24.274162  139657 command_runner.go:130] > SeccompEnabled:   true
	I1213 13:16:24.274166  139657 command_runner.go:130] > AppArmorEnabled:  false
	I1213 13:16:24.274253  139657 ssh_runner.go:195] Run: crio --version
	I1213 13:16:24.307345  139657 command_runner.go:130] > crio version 1.29.1
	I1213 13:16:24.307372  139657 command_runner.go:130] > Version:        1.29.1
	I1213 13:16:24.307385  139657 command_runner.go:130] > GitCommit:      unknown
	I1213 13:16:24.307390  139657 command_runner.go:130] > GitCommitDate:  unknown
	I1213 13:16:24.307394  139657 command_runner.go:130] > GitTreeState:   clean
	I1213 13:16:24.307400  139657 command_runner.go:130] > BuildDate:      2025-12-13T11:21:09Z
	I1213 13:16:24.307406  139657 command_runner.go:130] > GoVersion:      go1.25.5
	I1213 13:16:24.307412  139657 command_runner.go:130] > Compiler:       gc
	I1213 13:16:24.307419  139657 command_runner.go:130] > Platform:       linux/amd64
	I1213 13:16:24.307425  139657 command_runner.go:130] > Linkmode:       dynamic
	I1213 13:16:24.307436  139657 command_runner.go:130] > BuildTags:      
	I1213 13:16:24.307444  139657 command_runner.go:130] >   containers_image_ostree_stub
	I1213 13:16:24.307453  139657 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1213 13:16:24.307458  139657 command_runner.go:130] >   btrfs_noversion
	I1213 13:16:24.307462  139657 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1213 13:16:24.307468  139657 command_runner.go:130] >   libdm_no_deferred_remove
	I1213 13:16:24.307472  139657 command_runner.go:130] >   seccomp
	I1213 13:16:24.307476  139657 command_runner.go:130] > LDFlags:          unknown
	I1213 13:16:24.307481  139657 command_runner.go:130] > SeccompEnabled:   true
	I1213 13:16:24.307484  139657 command_runner.go:130] > AppArmorEnabled:  false
	I1213 13:16:24.309954  139657 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1213 13:16:24.314441  139657 main.go:143] libmachine: domain functional-101171 has defined MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:16:24.314910  139657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7c:80:d0", ip: ""} in network mk-functional-101171: {Iface:virbr1 ExpiryTime:2025-12-13 14:13:45 +0000 UTC Type:0 Mac:52:54:00:7c:80:d0 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-101171 Clientid:01:52:54:00:7c:80:d0}
	I1213 13:16:24.314934  139657 main.go:143] libmachine: domain functional-101171 has defined IP address 192.168.39.124 and MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:16:24.315179  139657 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1213 13:16:24.320471  139657 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1213 13:16:24.320604  139657 kubeadm.go:884] updating cluster {Name:functional-101171 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-101171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.124 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 13:16:24.320792  139657 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 13:16:24.320856  139657 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:16:24.358340  139657 command_runner.go:130] > {
	I1213 13:16:24.358367  139657 command_runner.go:130] >   "images":  [
	I1213 13:16:24.358373  139657 command_runner.go:130] >     {
	I1213 13:16:24.358385  139657 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1213 13:16:24.358391  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.358399  139657 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 13:16:24.358414  139657 command_runner.go:130] >       ],
	I1213 13:16:24.358422  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.358433  139657 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1213 13:16:24.358445  139657 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1213 13:16:24.358469  139657 command_runner.go:130] >       ],
	I1213 13:16:24.358478  139657 command_runner.go:130] >       "size":  "109379124",
	I1213 13:16:24.358484  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.358497  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.358504  139657 command_runner.go:130] >     },
	I1213 13:16:24.358509  139657 command_runner.go:130] >     {
	I1213 13:16:24.358519  139657 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1213 13:16:24.358529  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.358538  139657 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 13:16:24.358548  139657 command_runner.go:130] >       ],
	I1213 13:16:24.358553  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.358565  139657 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1213 13:16:24.358580  139657 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1213 13:16:24.358591  139657 command_runner.go:130] >       ],
	I1213 13:16:24.358598  139657 command_runner.go:130] >       "size":  "31470524",
	I1213 13:16:24.358604  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.358617  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.358623  139657 command_runner.go:130] >     },
	I1213 13:16:24.358626  139657 command_runner.go:130] >     {
	I1213 13:16:24.358634  139657 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1213 13:16:24.358644  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.358653  139657 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1213 13:16:24.358661  139657 command_runner.go:130] >       ],
	I1213 13:16:24.358668  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.358685  139657 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1213 13:16:24.358707  139657 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1213 13:16:24.358715  139657 command_runner.go:130] >       ],
	I1213 13:16:24.358721  139657 command_runner.go:130] >       "size":  "76103547",
	I1213 13:16:24.358731  139657 command_runner.go:130] >       "username":  "nonroot",
	I1213 13:16:24.358737  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.358744  139657 command_runner.go:130] >     },
	I1213 13:16:24.358748  139657 command_runner.go:130] >     {
	I1213 13:16:24.358757  139657 command_runner.go:130] >       "id":  "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1",
	I1213 13:16:24.358770  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.358779  139657 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 13:16:24.358784  139657 command_runner.go:130] >       ],
	I1213 13:16:24.358793  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.358810  139657 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1213 13:16:24.358823  139657 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"
	I1213 13:16:24.358828  139657 command_runner.go:130] >       ],
	I1213 13:16:24.358834  139657 command_runner.go:130] >       "size":  "63585106",
	I1213 13:16:24.358840  139657 command_runner.go:130] >       "uid":  {
	I1213 13:16:24.358849  139657 command_runner.go:130] >         "value":  "0"
	I1213 13:16:24.358855  139657 command_runner.go:130] >       },
	I1213 13:16:24.358875  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.358883  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.358889  139657 command_runner.go:130] >     },
	I1213 13:16:24.358896  139657 command_runner.go:130] >     {
	I1213 13:16:24.358905  139657 command_runner.go:130] >       "id":  "a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85",
	I1213 13:16:24.358911  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.358918  139657 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.2"
	I1213 13:16:24.358926  139657 command_runner.go:130] >       ],
	I1213 13:16:24.358933  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.358946  139657 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077",
	I1213 13:16:24.358960  139657 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd"
	I1213 13:16:24.358967  139657 command_runner.go:130] >       ],
	I1213 13:16:24.358974  139657 command_runner.go:130] >       "size":  "89046001",
	I1213 13:16:24.358982  139657 command_runner.go:130] >       "uid":  {
	I1213 13:16:24.358987  139657 command_runner.go:130] >         "value":  "0"
	I1213 13:16:24.358995  139657 command_runner.go:130] >       },
	I1213 13:16:24.359001  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.359010  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.359016  139657 command_runner.go:130] >     },
	I1213 13:16:24.359025  139657 command_runner.go:130] >     {
	I1213 13:16:24.359035  139657 command_runner.go:130] >       "id":  "01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8",
	I1213 13:16:24.359045  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.359060  139657 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.2"
	I1213 13:16:24.359103  139657 command_runner.go:130] >       ],
	I1213 13:16:24.359117  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.359130  139657 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb",
	I1213 13:16:24.359145  139657 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e"
	I1213 13:16:24.359151  139657 command_runner.go:130] >       ],
	I1213 13:16:24.359158  139657 command_runner.go:130] >       "size":  "76004183",
	I1213 13:16:24.359164  139657 command_runner.go:130] >       "uid":  {
	I1213 13:16:24.359169  139657 command_runner.go:130] >         "value":  "0"
	I1213 13:16:24.359177  139657 command_runner.go:130] >       },
	I1213 13:16:24.359182  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.359190  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.359196  139657 command_runner.go:130] >     },
	I1213 13:16:24.359201  139657 command_runner.go:130] >     {
	I1213 13:16:24.359218  139657 command_runner.go:130] >       "id":  "8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45",
	I1213 13:16:24.359228  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.359235  139657 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.2"
	I1213 13:16:24.359243  139657 command_runner.go:130] >       ],
	I1213 13:16:24.359251  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.359266  139657 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74",
	I1213 13:16:24.359281  139657 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"
	I1213 13:16:24.359291  139657 command_runner.go:130] >       ],
	I1213 13:16:24.359298  139657 command_runner.go:130] >       "size":  "73145240",
	I1213 13:16:24.359307  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.359314  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.359323  139657 command_runner.go:130] >     },
	I1213 13:16:24.359328  139657 command_runner.go:130] >     {
	I1213 13:16:24.359338  139657 command_runner.go:130] >       "id":  "88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952",
	I1213 13:16:24.359344  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.359350  139657 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.2"
	I1213 13:16:24.359355  139657 command_runner.go:130] >       ],
	I1213 13:16:24.359359  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.359366  139657 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6",
	I1213 13:16:24.359407  139657 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda"
	I1213 13:16:24.359414  139657 command_runner.go:130] >       ],
	I1213 13:16:24.359418  139657 command_runner.go:130] >       "size":  "53848919",
	I1213 13:16:24.359422  139657 command_runner.go:130] >       "uid":  {
	I1213 13:16:24.359425  139657 command_runner.go:130] >         "value":  "0"
	I1213 13:16:24.359428  139657 command_runner.go:130] >       },
	I1213 13:16:24.359432  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.359439  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.359442  139657 command_runner.go:130] >     },
	I1213 13:16:24.359445  139657 command_runner.go:130] >     {
	I1213 13:16:24.359453  139657 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1213 13:16:24.359457  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.359463  139657 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 13:16:24.359466  139657 command_runner.go:130] >       ],
	I1213 13:16:24.359470  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.359478  139657 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1213 13:16:24.359485  139657 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1213 13:16:24.359490  139657 command_runner.go:130] >       ],
	I1213 13:16:24.359494  139657 command_runner.go:130] >       "size":  "742092",
	I1213 13:16:24.359497  139657 command_runner.go:130] >       "uid":  {
	I1213 13:16:24.359501  139657 command_runner.go:130] >         "value":  "65535"
	I1213 13:16:24.359506  139657 command_runner.go:130] >       },
	I1213 13:16:24.359510  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.359514  139657 command_runner.go:130] >       "pinned":  true
	I1213 13:16:24.359519  139657 command_runner.go:130] >     }
	I1213 13:16:24.359522  139657 command_runner.go:130] >   ]
	I1213 13:16:24.359525  139657 command_runner.go:130] > }
	I1213 13:16:24.360333  139657 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 13:16:24.360355  139657 crio.go:433] Images already preloaded, skipping extraction
	I1213 13:16:24.360418  139657 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:16:24.392193  139657 command_runner.go:130] > {
	I1213 13:16:24.392217  139657 command_runner.go:130] >   "images":  [
	I1213 13:16:24.392221  139657 command_runner.go:130] >     {
	I1213 13:16:24.392229  139657 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1213 13:16:24.392236  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.392246  139657 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 13:16:24.392257  139657 command_runner.go:130] >       ],
	I1213 13:16:24.392268  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.392284  139657 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1213 13:16:24.392297  139657 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1213 13:16:24.392305  139657 command_runner.go:130] >       ],
	I1213 13:16:24.392314  139657 command_runner.go:130] >       "size":  "109379124",
	I1213 13:16:24.392328  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.392335  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.392339  139657 command_runner.go:130] >     },
	I1213 13:16:24.392344  139657 command_runner.go:130] >     {
	I1213 13:16:24.392351  139657 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1213 13:16:24.392357  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.392364  139657 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 13:16:24.392372  139657 command_runner.go:130] >       ],
	I1213 13:16:24.392379  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.392393  139657 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1213 13:16:24.392409  139657 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1213 13:16:24.392417  139657 command_runner.go:130] >       ],
	I1213 13:16:24.392423  139657 command_runner.go:130] >       "size":  "31470524",
	I1213 13:16:24.392430  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.392438  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.392443  139657 command_runner.go:130] >     },
	I1213 13:16:24.392447  139657 command_runner.go:130] >     {
	I1213 13:16:24.392456  139657 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1213 13:16:24.392462  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.392467  139657 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1213 13:16:24.392472  139657 command_runner.go:130] >       ],
	I1213 13:16:24.392478  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.392492  139657 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1213 13:16:24.392507  139657 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1213 13:16:24.392518  139657 command_runner.go:130] >       ],
	I1213 13:16:24.392527  139657 command_runner.go:130] >       "size":  "76103547",
	I1213 13:16:24.392537  139657 command_runner.go:130] >       "username":  "nonroot",
	I1213 13:16:24.392545  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.392548  139657 command_runner.go:130] >     },
	I1213 13:16:24.392551  139657 command_runner.go:130] >     {
	I1213 13:16:24.392557  139657 command_runner.go:130] >       "id":  "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1",
	I1213 13:16:24.392564  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.392579  139657 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 13:16:24.392592  139657 command_runner.go:130] >       ],
	I1213 13:16:24.392603  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.392617  139657 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1213 13:16:24.392633  139657 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"
	I1213 13:16:24.392645  139657 command_runner.go:130] >       ],
	I1213 13:16:24.392654  139657 command_runner.go:130] >       "size":  "63585106",
	I1213 13:16:24.392663  139657 command_runner.go:130] >       "uid":  {
	I1213 13:16:24.392673  139657 command_runner.go:130] >         "value":  "0"
	I1213 13:16:24.392679  139657 command_runner.go:130] >       },
	I1213 13:16:24.392690  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.392698  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.392706  139657 command_runner.go:130] >     },
	I1213 13:16:24.392712  139657 command_runner.go:130] >     {
	I1213 13:16:24.392724  139657 command_runner.go:130] >       "id":  "a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85",
	I1213 13:16:24.392734  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.392746  139657 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.2"
	I1213 13:16:24.392754  139657 command_runner.go:130] >       ],
	I1213 13:16:24.392761  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.392775  139657 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077",
	I1213 13:16:24.392788  139657 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd"
	I1213 13:16:24.392794  139657 command_runner.go:130] >       ],
	I1213 13:16:24.392800  139657 command_runner.go:130] >       "size":  "89046001",
	I1213 13:16:24.392808  139657 command_runner.go:130] >       "uid":  {
	I1213 13:16:24.392818  139657 command_runner.go:130] >         "value":  "0"
	I1213 13:16:24.392826  139657 command_runner.go:130] >       },
	I1213 13:16:24.392833  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.392843  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.392852  139657 command_runner.go:130] >     },
	I1213 13:16:24.392856  139657 command_runner.go:130] >     {
	I1213 13:16:24.392868  139657 command_runner.go:130] >       "id":  "01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8",
	I1213 13:16:24.392876  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.392888  139657 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.2"
	I1213 13:16:24.392895  139657 command_runner.go:130] >       ],
	I1213 13:16:24.392909  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.392924  139657 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb",
	I1213 13:16:24.392940  139657 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e"
	I1213 13:16:24.392949  139657 command_runner.go:130] >       ],
	I1213 13:16:24.392959  139657 command_runner.go:130] >       "size":  "76004183",
	I1213 13:16:24.392967  139657 command_runner.go:130] >       "uid":  {
	I1213 13:16:24.392977  139657 command_runner.go:130] >         "value":  "0"
	I1213 13:16:24.392985  139657 command_runner.go:130] >       },
	I1213 13:16:24.392992  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.393001  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.393007  139657 command_runner.go:130] >     },
	I1213 13:16:24.393011  139657 command_runner.go:130] >     {
	I1213 13:16:24.393021  139657 command_runner.go:130] >       "id":  "8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45",
	I1213 13:16:24.393031  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.393042  139657 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.2"
	I1213 13:16:24.393048  139657 command_runner.go:130] >       ],
	I1213 13:16:24.393058  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.393089  139657 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74",
	I1213 13:16:24.393113  139657 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"
	I1213 13:16:24.393119  139657 command_runner.go:130] >       ],
	I1213 13:16:24.393123  139657 command_runner.go:130] >       "size":  "73145240",
	I1213 13:16:24.393133  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.393140  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.393145  139657 command_runner.go:130] >     },
	I1213 13:16:24.393150  139657 command_runner.go:130] >     {
	I1213 13:16:24.393160  139657 command_runner.go:130] >       "id":  "88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952",
	I1213 13:16:24.393167  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.393174  139657 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.2"
	I1213 13:16:24.393179  139657 command_runner.go:130] >       ],
	I1213 13:16:24.393186  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.393197  139657 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6",
	I1213 13:16:24.393226  139657 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda"
	I1213 13:16:24.393232  139657 command_runner.go:130] >       ],
	I1213 13:16:24.393246  139657 command_runner.go:130] >       "size":  "53848919",
	I1213 13:16:24.393251  139657 command_runner.go:130] >       "uid":  {
	I1213 13:16:24.393257  139657 command_runner.go:130] >         "value":  "0"
	I1213 13:16:24.393262  139657 command_runner.go:130] >       },
	I1213 13:16:24.393267  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.393274  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.393281  139657 command_runner.go:130] >     },
	I1213 13:16:24.393286  139657 command_runner.go:130] >     {
	I1213 13:16:24.393296  139657 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1213 13:16:24.393300  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.393305  139657 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 13:16:24.393311  139657 command_runner.go:130] >       ],
	I1213 13:16:24.393319  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.393333  139657 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1213 13:16:24.393349  139657 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1213 13:16:24.393357  139657 command_runner.go:130] >       ],
	I1213 13:16:24.393367  139657 command_runner.go:130] >       "size":  "742092",
	I1213 13:16:24.393376  139657 command_runner.go:130] >       "uid":  {
	I1213 13:16:24.393383  139657 command_runner.go:130] >         "value":  "65535"
	I1213 13:16:24.393390  139657 command_runner.go:130] >       },
	I1213 13:16:24.393396  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.393405  139657 command_runner.go:130] >       "pinned":  true
	I1213 13:16:24.393408  139657 command_runner.go:130] >     }
	I1213 13:16:24.393416  139657 command_runner.go:130] >   ]
	I1213 13:16:24.393422  139657 command_runner.go:130] > }
	I1213 13:16:24.393572  139657 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 13:16:24.393595  139657 cache_images.go:86] Images are preloaded, skipping loading
	I1213 13:16:24.393606  139657 kubeadm.go:935] updating node { 192.168.39.124 8441 v1.34.2 crio true true} ...
	I1213 13:16:24.393771  139657 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-101171 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.124
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:functional-101171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 13:16:24.393855  139657 ssh_runner.go:195] Run: crio config
	I1213 13:16:24.427284  139657 command_runner.go:130] ! time="2025-12-13 13:16:24.422256723Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1213 13:16:24.433797  139657 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1213 13:16:24.439545  139657 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1213 13:16:24.439572  139657 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1213 13:16:24.439581  139657 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1213 13:16:24.439585  139657 command_runner.go:130] > #
	I1213 13:16:24.439594  139657 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1213 13:16:24.439602  139657 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1213 13:16:24.439611  139657 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1213 13:16:24.439629  139657 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1213 13:16:24.439638  139657 command_runner.go:130] > # reload'.
	I1213 13:16:24.439648  139657 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1213 13:16:24.439661  139657 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1213 13:16:24.439675  139657 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1213 13:16:24.439687  139657 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1213 13:16:24.439693  139657 command_runner.go:130] > [crio]
	I1213 13:16:24.439704  139657 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1213 13:16:24.439712  139657 command_runner.go:130] > # containers images, in this directory.
	I1213 13:16:24.439720  139657 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1213 13:16:24.439738  139657 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1213 13:16:24.439749  139657 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1213 13:16:24.439761  139657 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1213 13:16:24.439771  139657 command_runner.go:130] > # imagestore = ""
	I1213 13:16:24.439781  139657 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1213 13:16:24.439794  139657 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1213 13:16:24.439803  139657 command_runner.go:130] > # storage_driver = "overlay"
	I1213 13:16:24.439813  139657 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1213 13:16:24.439825  139657 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1213 13:16:24.439832  139657 command_runner.go:130] > storage_option = [
	I1213 13:16:24.439844  139657 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1213 13:16:24.439852  139657 command_runner.go:130] > ]
	I1213 13:16:24.439861  139657 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1213 13:16:24.439872  139657 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1213 13:16:24.439882  139657 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1213 13:16:24.439891  139657 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1213 13:16:24.439911  139657 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1213 13:16:24.439921  139657 command_runner.go:130] > # always happen on a node reboot
	I1213 13:16:24.439930  139657 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1213 13:16:24.439952  139657 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1213 13:16:24.439965  139657 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1213 13:16:24.439979  139657 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1213 13:16:24.439990  139657 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1213 13:16:24.440002  139657 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1213 13:16:24.440018  139657 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1213 13:16:24.440026  139657 command_runner.go:130] > # internal_wipe = true
	I1213 13:16:24.440039  139657 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1213 13:16:24.440051  139657 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1213 13:16:24.440059  139657 command_runner.go:130] > # internal_repair = false
	I1213 13:16:24.440068  139657 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1213 13:16:24.440095  139657 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1213 13:16:24.440115  139657 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1213 13:16:24.440127  139657 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1213 13:16:24.440141  139657 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1213 13:16:24.440150  139657 command_runner.go:130] > [crio.api]
	I1213 13:16:24.440158  139657 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1213 13:16:24.440169  139657 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1213 13:16:24.440178  139657 command_runner.go:130] > # IP address on which the stream server will listen.
	I1213 13:16:24.440188  139657 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1213 13:16:24.440198  139657 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1213 13:16:24.440210  139657 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1213 13:16:24.440217  139657 command_runner.go:130] > # stream_port = "0"
	I1213 13:16:24.440227  139657 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1213 13:16:24.440235  139657 command_runner.go:130] > # stream_enable_tls = false
	I1213 13:16:24.440245  139657 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1213 13:16:24.440256  139657 command_runner.go:130] > # stream_idle_timeout = ""
	I1213 13:16:24.440267  139657 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1213 13:16:24.440289  139657 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1213 13:16:24.440298  139657 command_runner.go:130] > # minutes.
	I1213 13:16:24.440313  139657 command_runner.go:130] > # stream_tls_cert = ""
	I1213 13:16:24.440341  139657 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1213 13:16:24.440355  139657 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1213 13:16:24.440363  139657 command_runner.go:130] > # stream_tls_key = ""
	I1213 13:16:24.440375  139657 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1213 13:16:24.440386  139657 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1213 13:16:24.440416  139657 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1213 13:16:24.440425  139657 command_runner.go:130] > # stream_tls_ca = ""
	I1213 13:16:24.440437  139657 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1213 13:16:24.440447  139657 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1213 13:16:24.440460  139657 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1213 13:16:24.440470  139657 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1213 13:16:24.440480  139657 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1213 13:16:24.440492  139657 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1213 13:16:24.440498  139657 command_runner.go:130] > [crio.runtime]
	I1213 13:16:24.440510  139657 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1213 13:16:24.440519  139657 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1213 13:16:24.440528  139657 command_runner.go:130] > # "nofile=1024:2048"
	I1213 13:16:24.440538  139657 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1213 13:16:24.440547  139657 command_runner.go:130] > # default_ulimits = [
	I1213 13:16:24.440553  139657 command_runner.go:130] > # ]
	I1213 13:16:24.440565  139657 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1213 13:16:24.440572  139657 command_runner.go:130] > # no_pivot = false
	I1213 13:16:24.440582  139657 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1213 13:16:24.440592  139657 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1213 13:16:24.440603  139657 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1213 13:16:24.440612  139657 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1213 13:16:24.440623  139657 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1213 13:16:24.440635  139657 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1213 13:16:24.440644  139657 command_runner.go:130] > conmon = "/usr/bin/conmon"
	I1213 13:16:24.440652  139657 command_runner.go:130] > # Cgroup setting for conmon
	I1213 13:16:24.440664  139657 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1213 13:16:24.440672  139657 command_runner.go:130] > conmon_cgroup = "pod"
	I1213 13:16:24.440690  139657 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1213 13:16:24.440701  139657 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1213 13:16:24.440713  139657 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1213 13:16:24.440726  139657 command_runner.go:130] > conmon_env = [
	I1213 13:16:24.440736  139657 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1213 13:16:24.440743  139657 command_runner.go:130] > ]
	I1213 13:16:24.440753  139657 command_runner.go:130] > # Additional environment variables to set for all the
	I1213 13:16:24.440764  139657 command_runner.go:130] > # containers. These are overridden if set in the
	I1213 13:16:24.440774  139657 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1213 13:16:24.440783  139657 command_runner.go:130] > # default_env = [
	I1213 13:16:24.440788  139657 command_runner.go:130] > # ]
	I1213 13:16:24.440801  139657 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1213 13:16:24.440813  139657 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1213 13:16:24.440822  139657 command_runner.go:130] > # selinux = false
	I1213 13:16:24.440831  139657 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1213 13:16:24.440844  139657 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1213 13:16:24.440853  139657 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1213 13:16:24.440860  139657 command_runner.go:130] > # seccomp_profile = ""
	I1213 13:16:24.440868  139657 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1213 13:16:24.440877  139657 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1213 13:16:24.440888  139657 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1213 13:16:24.440896  139657 command_runner.go:130] > # which might increase security.
	I1213 13:16:24.440904  139657 command_runner.go:130] > # This option is currently deprecated,
	I1213 13:16:24.440914  139657 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1213 13:16:24.440925  139657 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1213 13:16:24.440935  139657 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1213 13:16:24.440949  139657 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1213 13:16:24.440961  139657 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1213 13:16:24.440972  139657 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1213 13:16:24.440982  139657 command_runner.go:130] > # This option supports live configuration reload.
	I1213 13:16:24.440989  139657 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1213 13:16:24.441001  139657 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1213 13:16:24.441008  139657 command_runner.go:130] > # the cgroup blockio controller.
	I1213 13:16:24.441025  139657 command_runner.go:130] > # blockio_config_file = ""
	I1213 13:16:24.441040  139657 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1213 13:16:24.441047  139657 command_runner.go:130] > # blockio parameters.
	I1213 13:16:24.441054  139657 command_runner.go:130] > # blockio_reload = false
	I1213 13:16:24.441065  139657 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1213 13:16:24.441088  139657 command_runner.go:130] > # irqbalance daemon.
	I1213 13:16:24.441100  139657 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1213 13:16:24.441116  139657 command_runner.go:130] > # irqbalance_config_restore_file allows one to set a cpu mask CRI-O should
	I1213 13:16:24.441138  139657 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1213 13:16:24.441152  139657 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1213 13:16:24.441171  139657 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1213 13:16:24.441183  139657 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1213 13:16:24.441194  139657 command_runner.go:130] > # This option supports live configuration reload.
	I1213 13:16:24.441201  139657 command_runner.go:130] > # rdt_config_file = ""
	I1213 13:16:24.441210  139657 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1213 13:16:24.441217  139657 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1213 13:16:24.441272  139657 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1213 13:16:24.441283  139657 command_runner.go:130] > # separate_pull_cgroup = ""
	I1213 13:16:24.441291  139657 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1213 13:16:24.441300  139657 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1213 13:16:24.441306  139657 command_runner.go:130] > # will be added.
	I1213 13:16:24.441314  139657 command_runner.go:130] > # default_capabilities = [
	I1213 13:16:24.441320  139657 command_runner.go:130] > # 	"CHOWN",
	I1213 13:16:24.441328  139657 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1213 13:16:24.441334  139657 command_runner.go:130] > # 	"FSETID",
	I1213 13:16:24.441341  139657 command_runner.go:130] > # 	"FOWNER",
	I1213 13:16:24.441347  139657 command_runner.go:130] > # 	"SETGID",
	I1213 13:16:24.441355  139657 command_runner.go:130] > # 	"SETUID",
	I1213 13:16:24.441361  139657 command_runner.go:130] > # 	"SETPCAP",
	I1213 13:16:24.441368  139657 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1213 13:16:24.441375  139657 command_runner.go:130] > # 	"KILL",
	I1213 13:16:24.441381  139657 command_runner.go:130] > # ]
	I1213 13:16:24.441394  139657 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1213 13:16:24.441414  139657 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1213 13:16:24.441425  139657 command_runner.go:130] > # add_inheritable_capabilities = false
	I1213 13:16:24.441436  139657 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1213 13:16:24.441449  139657 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1213 13:16:24.441457  139657 command_runner.go:130] > default_sysctls = [
	I1213 13:16:24.441465  139657 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1213 13:16:24.441471  139657 command_runner.go:130] > ]
	I1213 13:16:24.441479  139657 command_runner.go:130] > # List of devices on the host that a
	I1213 13:16:24.441492  139657 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1213 13:16:24.441499  139657 command_runner.go:130] > # allowed_devices = [
	I1213 13:16:24.441514  139657 command_runner.go:130] > # 	"/dev/fuse",
	I1213 13:16:24.441521  139657 command_runner.go:130] > # ]
	I1213 13:16:24.441529  139657 command_runner.go:130] > # List of additional devices, specified as
	I1213 13:16:24.441544  139657 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1213 13:16:24.441554  139657 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1213 13:16:24.441563  139657 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1213 13:16:24.441577  139657 command_runner.go:130] > # additional_devices = [
	I1213 13:16:24.441583  139657 command_runner.go:130] > # ]
	I1213 13:16:24.441592  139657 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1213 13:16:24.441599  139657 command_runner.go:130] > # cdi_spec_dirs = [
	I1213 13:16:24.441606  139657 command_runner.go:130] > # 	"/etc/cdi",
	I1213 13:16:24.441615  139657 command_runner.go:130] > # 	"/var/run/cdi",
	I1213 13:16:24.441620  139657 command_runner.go:130] > # ]
	I1213 13:16:24.441631  139657 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1213 13:16:24.441644  139657 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1213 13:16:24.441653  139657 command_runner.go:130] > # Defaults to false.
	I1213 13:16:24.441661  139657 command_runner.go:130] > # device_ownership_from_security_context = false
	I1213 13:16:24.441674  139657 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1213 13:16:24.441685  139657 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1213 13:16:24.441694  139657 command_runner.go:130] > # hooks_dir = [
	I1213 13:16:24.441700  139657 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1213 13:16:24.441707  139657 command_runner.go:130] > # ]
	I1213 13:16:24.441719  139657 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1213 13:16:24.441739  139657 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1213 13:16:24.441751  139657 command_runner.go:130] > # its default mounts from the following two files:
	I1213 13:16:24.441757  139657 command_runner.go:130] > #
	I1213 13:16:24.441770  139657 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1213 13:16:24.441780  139657 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1213 13:16:24.441791  139657 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1213 13:16:24.441797  139657 command_runner.go:130] > #
	I1213 13:16:24.441809  139657 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1213 13:16:24.441819  139657 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1213 13:16:24.441832  139657 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1213 13:16:24.441841  139657 command_runner.go:130] > #      only add mounts it finds in this file.
	I1213 13:16:24.441849  139657 command_runner.go:130] > #
	I1213 13:16:24.441856  139657 command_runner.go:130] > # default_mounts_file = ""
	I1213 13:16:24.441866  139657 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1213 13:16:24.441877  139657 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1213 13:16:24.441886  139657 command_runner.go:130] > pids_limit = 1024
	I1213 13:16:24.441896  139657 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1213 13:16:24.441906  139657 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1213 13:16:24.441917  139657 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1213 13:16:24.441931  139657 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1213 13:16:24.441941  139657 command_runner.go:130] > # log_size_max = -1
	I1213 13:16:24.441953  139657 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1213 13:16:24.441963  139657 command_runner.go:130] > # log_to_journald = false
	I1213 13:16:24.441977  139657 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1213 13:16:24.441987  139657 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1213 13:16:24.441995  139657 command_runner.go:130] > # Path to directory for container attach sockets.
	I1213 13:16:24.442006  139657 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1213 13:16:24.442015  139657 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1213 13:16:24.442024  139657 command_runner.go:130] > # bind_mount_prefix = ""
	I1213 13:16:24.442034  139657 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1213 13:16:24.442042  139657 command_runner.go:130] > # read_only = false
	I1213 13:16:24.442052  139657 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1213 13:16:24.442065  139657 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1213 13:16:24.442093  139657 command_runner.go:130] > # live configuration reload.
	I1213 13:16:24.442101  139657 command_runner.go:130] > # log_level = "info"
	I1213 13:16:24.442120  139657 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1213 13:16:24.442131  139657 command_runner.go:130] > # This option supports live configuration reload.
	I1213 13:16:24.442139  139657 command_runner.go:130] > # log_filter = ""
	I1213 13:16:24.442149  139657 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1213 13:16:24.442163  139657 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1213 13:16:24.442172  139657 command_runner.go:130] > # separated by comma.
	I1213 13:16:24.442185  139657 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 13:16:24.442194  139657 command_runner.go:130] > # uid_mappings = ""
	I1213 13:16:24.442205  139657 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1213 13:16:24.442218  139657 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1213 13:16:24.442227  139657 command_runner.go:130] > # separated by comma.
	I1213 13:16:24.442244  139657 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 13:16:24.442254  139657 command_runner.go:130] > # gid_mappings = ""
	I1213 13:16:24.442264  139657 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1213 13:16:24.442277  139657 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1213 13:16:24.442289  139657 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1213 13:16:24.442302  139657 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 13:16:24.442310  139657 command_runner.go:130] > # minimum_mappable_uid = -1
	I1213 13:16:24.442320  139657 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1213 13:16:24.442333  139657 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1213 13:16:24.442344  139657 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1213 13:16:24.442357  139657 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 13:16:24.442364  139657 command_runner.go:130] > # minimum_mappable_gid = -1
	I1213 13:16:24.442373  139657 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1213 13:16:24.442391  139657 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1213 13:16:24.442402  139657 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1213 13:16:24.442409  139657 command_runner.go:130] > # ctr_stop_timeout = 30
	I1213 13:16:24.442419  139657 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1213 13:16:24.442430  139657 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1213 13:16:24.442441  139657 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1213 13:16:24.442450  139657 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1213 13:16:24.442467  139657 command_runner.go:130] > drop_infra_ctr = false
	I1213 13:16:24.442479  139657 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1213 13:16:24.442489  139657 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1213 13:16:24.442503  139657 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1213 13:16:24.442510  139657 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1213 13:16:24.442523  139657 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1213 13:16:24.442534  139657 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1213 13:16:24.442546  139657 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1213 13:16:24.442554  139657 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1213 13:16:24.442563  139657 command_runner.go:130] > # shared_cpuset = ""
	I1213 13:16:24.442572  139657 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1213 13:16:24.442581  139657 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1213 13:16:24.442589  139657 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1213 13:16:24.442601  139657 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1213 13:16:24.442608  139657 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1213 13:16:24.442618  139657 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1213 13:16:24.442631  139657 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1213 13:16:24.442640  139657 command_runner.go:130] > # enable_criu_support = false
	I1213 13:16:24.442650  139657 command_runner.go:130] > # Enable/disable the generation of the container,
	I1213 13:16:24.442660  139657 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1213 13:16:24.442667  139657 command_runner.go:130] > # enable_pod_events = false
	I1213 13:16:24.442677  139657 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1213 13:16:24.442699  139657 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1213 13:16:24.442706  139657 command_runner.go:130] > # default_runtime = "runc"
	I1213 13:16:24.442715  139657 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1213 13:16:24.442726  139657 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating it as a directory).
	I1213 13:16:24.442741  139657 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1213 13:16:24.442756  139657 command_runner.go:130] > # creation as a file is not desired either.
	I1213 13:16:24.442774  139657 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1213 13:16:24.442784  139657 command_runner.go:130] > # the hostname is being managed dynamically.
	I1213 13:16:24.442792  139657 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1213 13:16:24.442797  139657 command_runner.go:130] > # ]
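	Following the /etc/hostname example just given, a populated list is a single TOML array in this file; a minimal sketch (the entry is only the example named above, not a value taken from this run):
	absent_mount_sources_to_reject = [
		"/etc/hostname",
	]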
	I1213 13:16:24.442815  139657 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1213 13:16:24.442828  139657 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1213 13:16:24.442840  139657 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1213 13:16:24.442851  139657 command_runner.go:130] > # Each entry in the table should follow the format:
	I1213 13:16:24.442856  139657 command_runner.go:130] > #
	I1213 13:16:24.442865  139657 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1213 13:16:24.442873  139657 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1213 13:16:24.442881  139657 command_runner.go:130] > # runtime_type = "oci"
	I1213 13:16:24.442949  139657 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1213 13:16:24.442960  139657 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1213 13:16:24.442967  139657 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1213 13:16:24.442973  139657 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1213 13:16:24.442978  139657 command_runner.go:130] > # monitor_env = []
	I1213 13:16:24.442986  139657 command_runner.go:130] > # privileged_without_host_devices = false
	I1213 13:16:24.442993  139657 command_runner.go:130] > # allowed_annotations = []
	I1213 13:16:24.443003  139657 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1213 13:16:24.443012  139657 command_runner.go:130] > # Where:
	I1213 13:16:24.443020  139657 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1213 13:16:24.443031  139657 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1213 13:16:24.443049  139657 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1213 13:16:24.443061  139657 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1213 13:16:24.443080  139657 command_runner.go:130] > #   in $PATH.
	I1213 13:16:24.443104  139657 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1213 13:16:24.443121  139657 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1213 13:16:24.443132  139657 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1213 13:16:24.443140  139657 command_runner.go:130] > #   state.
	I1213 13:16:24.443151  139657 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1213 13:16:24.443162  139657 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1213 13:16:24.443173  139657 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1213 13:16:24.443185  139657 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1213 13:16:24.443195  139657 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1213 13:16:24.443209  139657 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1213 13:16:24.443220  139657 command_runner.go:130] > #   The currently recognized values are:
	I1213 13:16:24.443242  139657 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1213 13:16:24.443258  139657 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1213 13:16:24.443270  139657 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1213 13:16:24.443280  139657 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1213 13:16:24.443293  139657 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1213 13:16:24.443305  139657 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1213 13:16:24.443319  139657 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1213 13:16:24.443332  139657 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1213 13:16:24.443342  139657 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1213 13:16:24.443354  139657 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1213 13:16:24.443362  139657 command_runner.go:130] > #   deprecated option "conmon".
	I1213 13:16:24.443374  139657 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1213 13:16:24.443385  139657 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1213 13:16:24.443397  139657 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1213 13:16:24.443407  139657 command_runner.go:130] > #   should be moved to the container's cgroup
	I1213 13:16:24.443418  139657 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1213 13:16:24.443429  139657 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1213 13:16:24.443440  139657 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1213 13:16:24.443452  139657 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1213 13:16:24.443457  139657 command_runner.go:130] > #
	I1213 13:16:24.443467  139657 command_runner.go:130] > # Using the seccomp notifier feature:
	I1213 13:16:24.443473  139657 command_runner.go:130] > #
	I1213 13:16:24.443482  139657 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1213 13:16:24.443496  139657 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1213 13:16:24.443504  139657 command_runner.go:130] > #
	I1213 13:16:24.443514  139657 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1213 13:16:24.443525  139657 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1213 13:16:24.443533  139657 command_runner.go:130] > #
	I1213 13:16:24.443544  139657 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1213 13:16:24.443550  139657 command_runner.go:130] > # feature.
	I1213 13:16:24.443555  139657 command_runner.go:130] > #
	I1213 13:16:24.443567  139657 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1213 13:16:24.443577  139657 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1213 13:16:24.443598  139657 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1213 13:16:24.443613  139657 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1213 13:16:24.443628  139657 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1213 13:16:24.443636  139657 command_runner.go:130] > #
	I1213 13:16:24.443646  139657 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1213 13:16:24.443659  139657 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1213 13:16:24.443667  139657 command_runner.go:130] > #
	I1213 13:16:24.443676  139657 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1213 13:16:24.443688  139657 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1213 13:16:24.443694  139657 command_runner.go:130] > #
	I1213 13:16:24.443705  139657 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1213 13:16:24.443718  139657 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1213 13:16:24.443725  139657 command_runner.go:130] > # limitation.
	I1213 13:16:24.443734  139657 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1213 13:16:24.443740  139657 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1213 13:16:24.443747  139657 command_runner.go:130] > runtime_type = "oci"
	I1213 13:16:24.443755  139657 command_runner.go:130] > runtime_root = "/run/runc"
	I1213 13:16:24.443766  139657 command_runner.go:130] > runtime_config_path = ""
	I1213 13:16:24.443773  139657 command_runner.go:130] > monitor_path = "/usr/bin/conmon"
	I1213 13:16:24.443779  139657 command_runner.go:130] > monitor_cgroup = "pod"
	I1213 13:16:24.443786  139657 command_runner.go:130] > monitor_exec_cgroup = ""
	I1213 13:16:24.443792  139657 command_runner.go:130] > monitor_env = [
	I1213 13:16:24.443802  139657 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1213 13:16:24.443810  139657 command_runner.go:130] > ]
	I1213 13:16:24.443818  139657 command_runner.go:130] > privileged_without_host_devices = false
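	Putting the [crio.runtime.runtimes.<runtime-handler>] format described above together with the runc entry, an additional handler would look roughly like the sketch below; the crun handler name, paths, annotations, and platform mapping are illustrative assumptions, not values from this run.
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/bin/conmon"
	monitor_cgroup = "pod"
	monitor_env = [
		"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	]
	# Allow the seccomp notifier and device annotations described earlier
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
		"io.kubernetes.cri-o.Devices",
	]
	# Optional per-platform binary override, following the documented "os/arch" form
	platform_runtime_paths = { "linux/arm64" = "/usr/bin/crun" }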
	I1213 13:16:24.443830  139657 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1213 13:16:24.443839  139657 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1213 13:16:24.443849  139657 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1213 13:16:24.443863  139657 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1213 13:16:24.443876  139657 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1213 13:16:24.443887  139657 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1213 13:16:24.443903  139657 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1213 13:16:24.443918  139657 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1213 13:16:24.443936  139657 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1213 13:16:24.443950  139657 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1213 13:16:24.443956  139657 command_runner.go:130] > # Example:
	I1213 13:16:24.443964  139657 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1213 13:16:24.443971  139657 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1213 13:16:24.443984  139657 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1213 13:16:24.443994  139657 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1213 13:16:24.444004  139657 command_runner.go:130] > # cpuset = 0
	I1213 13:16:24.444013  139657 command_runner.go:130] > # cpushares = "0-1"
	I1213 13:16:24.444019  139657 command_runner.go:130] > # Where:
	I1213 13:16:24.444027  139657 command_runner.go:130] > # The workload name is workload-type.
	I1213 13:16:24.444038  139657 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1213 13:16:24.444050  139657 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1213 13:16:24.444060  139657 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1213 13:16:24.444086  139657 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1213 13:16:24.444097  139657 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
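	Assembled from the commented example above, a complete workloads entry would read roughly as follows. The commented example appears to swap the two value forms, so this sketch assumes cpuset takes a Linux CPU list and cpushares an integer share count; all values are illustrative, not from this run.
	[crio.runtime.workloads.workload-type]
	activation_annotation = "io.crio/workload"
	annotation_prefix = "io.crio.workload-type"
	[crio.runtime.workloads.workload-type.resources]
	cpuset = "0-1"
	cpushares = 1024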
	I1213 13:16:24.444112  139657 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1213 13:16:24.444127  139657 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1213 13:16:24.444136  139657 command_runner.go:130] > # Default value is set to true
	I1213 13:16:24.444143  139657 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1213 13:16:24.444152  139657 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1213 13:16:24.444162  139657 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1213 13:16:24.444170  139657 command_runner.go:130] > # Default value is set to 'false'
	I1213 13:16:24.444179  139657 command_runner.go:130] > # disable_hostport_mapping = false
	I1213 13:16:24.444194  139657 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1213 13:16:24.444202  139657 command_runner.go:130] > #
	I1213 13:16:24.444212  139657 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1213 13:16:24.444227  139657 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1213 13:16:24.444240  139657 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1213 13:16:24.444250  139657 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1213 13:16:24.444260  139657 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1213 13:16:24.444277  139657 command_runner.go:130] > [crio.image]
	I1213 13:16:24.444290  139657 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1213 13:16:24.444308  139657 command_runner.go:130] > # default_transport = "docker://"
	I1213 13:16:24.444322  139657 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1213 13:16:24.444336  139657 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1213 13:16:24.444346  139657 command_runner.go:130] > # global_auth_file = ""
	I1213 13:16:24.444357  139657 command_runner.go:130] > # The image used to instantiate infra containers.
	I1213 13:16:24.444366  139657 command_runner.go:130] > # This option supports live configuration reload.
	I1213 13:16:24.444377  139657 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10.1"
	I1213 13:16:24.444388  139657 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1213 13:16:24.444401  139657 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1213 13:16:24.444411  139657 command_runner.go:130] > # This option supports live configuration reload.
	I1213 13:16:24.444418  139657 command_runner.go:130] > # pause_image_auth_file = ""
	I1213 13:16:24.444432  139657 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1213 13:16:24.444443  139657 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1213 13:16:24.444456  139657 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1213 13:16:24.444465  139657 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1213 13:16:24.444475  139657 command_runner.go:130] > # pause_command = "/pause"
	I1213 13:16:24.444485  139657 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1213 13:16:24.444498  139657 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1213 13:16:24.444510  139657 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1213 13:16:24.444522  139657 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1213 13:16:24.444533  139657 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1213 13:16:24.444547  139657 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1213 13:16:24.444555  139657 command_runner.go:130] > # pinned_images = [
	I1213 13:16:24.444560  139657 command_runner.go:130] > # ]
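	As an illustration of the exact/glob/keyword matching described above, a populated pinned_images list could look like the sketch below; apart from the pause image already configured in this file, the image names are placeholders.
	pinned_images = [
		"registry.k8s.io/pause:3.10.1",  # exact match (entire name)
		"registry.k8s.io/kube-*",        # glob: wildcard at the end only
		"*coredns*",                     # keyword: wildcards on both ends
	]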
	I1213 13:16:24.444570  139657 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1213 13:16:24.444583  139657 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1213 13:16:24.444593  139657 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1213 13:16:24.444612  139657 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1213 13:16:24.444624  139657 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1213 13:16:24.444632  139657 command_runner.go:130] > # signature_policy = ""
	I1213 13:16:24.444644  139657 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1213 13:16:24.444655  139657 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1213 13:16:24.444668  139657 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1213 13:16:24.444686  139657 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1213 13:16:24.444698  139657 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1213 13:16:24.444707  139657 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1213 13:16:24.444717  139657 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1213 13:16:24.444730  139657 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1213 13:16:24.444737  139657 command_runner.go:130] > # changing them here.
	I1213 13:16:24.444744  139657 command_runner.go:130] > # insecure_registries = [
	I1213 13:16:24.444749  139657 command_runner.go:130] > # ]
	I1213 13:16:24.444762  139657 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1213 13:16:24.444771  139657 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1213 13:16:24.444780  139657 command_runner.go:130] > # image_volumes = "mkdir"
	I1213 13:16:24.444788  139657 command_runner.go:130] > # Temporary directory to use for storing big files
	I1213 13:16:24.444796  139657 command_runner.go:130] > # big_files_temporary_dir = ""
	I1213 13:16:24.444807  139657 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1213 13:16:24.444818  139657 command_runner.go:130] > # CNI plugins.
	I1213 13:16:24.444827  139657 command_runner.go:130] > [crio.network]
	I1213 13:16:24.444837  139657 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1213 13:16:24.444847  139657 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1213 13:16:24.444854  139657 command_runner.go:130] > # cni_default_network = ""
	I1213 13:16:24.444863  139657 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1213 13:16:24.444871  139657 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1213 13:16:24.444880  139657 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1213 13:16:24.444887  139657 command_runner.go:130] > # plugin_dirs = [
	I1213 13:16:24.444894  139657 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1213 13:16:24.444898  139657 command_runner.go:130] > # ]
	I1213 13:16:24.444913  139657 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1213 13:16:24.444923  139657 command_runner.go:130] > [crio.metrics]
	I1213 13:16:24.444931  139657 command_runner.go:130] > # Globally enable or disable metrics support.
	I1213 13:16:24.444941  139657 command_runner.go:130] > enable_metrics = true
	I1213 13:16:24.444949  139657 command_runner.go:130] > # Specify enabled metrics collectors.
	I1213 13:16:24.444959  139657 command_runner.go:130] > # Per default all metrics are enabled.
	I1213 13:16:24.444971  139657 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1213 13:16:24.444984  139657 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1213 13:16:24.445004  139657 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1213 13:16:24.445013  139657 command_runner.go:130] > # metrics_collectors = [
	I1213 13:16:24.445020  139657 command_runner.go:130] > # 	"operations",
	I1213 13:16:24.445031  139657 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1213 13:16:24.445038  139657 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1213 13:16:24.445045  139657 command_runner.go:130] > # 	"operations_errors",
	I1213 13:16:24.445052  139657 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1213 13:16:24.445060  139657 command_runner.go:130] > # 	"image_pulls_by_name",
	I1213 13:16:24.445068  139657 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1213 13:16:24.445085  139657 command_runner.go:130] > # 	"image_pulls_failures",
	I1213 13:16:24.445092  139657 command_runner.go:130] > # 	"image_pulls_successes",
	I1213 13:16:24.445099  139657 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1213 13:16:24.445110  139657 command_runner.go:130] > # 	"image_layer_reuse",
	I1213 13:16:24.445121  139657 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1213 13:16:24.445128  139657 command_runner.go:130] > # 	"containers_oom_total",
	I1213 13:16:24.445134  139657 command_runner.go:130] > # 	"containers_oom",
	I1213 13:16:24.445141  139657 command_runner.go:130] > # 	"processes_defunct",
	I1213 13:16:24.445147  139657 command_runner.go:130] > # 	"operations_total",
	I1213 13:16:24.445155  139657 command_runner.go:130] > # 	"operations_latency_seconds",
	I1213 13:16:24.445163  139657 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1213 13:16:24.445170  139657 command_runner.go:130] > # 	"operations_errors_total",
	I1213 13:16:24.445178  139657 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1213 13:16:24.445186  139657 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1213 13:16:24.445194  139657 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1213 13:16:24.445202  139657 command_runner.go:130] > # 	"image_pulls_success_total",
	I1213 13:16:24.445210  139657 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1213 13:16:24.445218  139657 command_runner.go:130] > # 	"containers_oom_count_total",
	I1213 13:16:24.445231  139657 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1213 13:16:24.445238  139657 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1213 13:16:24.445244  139657 command_runner.go:130] > # ]
	I1213 13:16:24.445253  139657 command_runner.go:130] > # The port on which the metrics server will listen.
	I1213 13:16:24.445259  139657 command_runner.go:130] > # metrics_port = 9090
	I1213 13:16:24.445268  139657 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1213 13:16:24.445284  139657 command_runner.go:130] > # metrics_socket = ""
	I1213 13:16:24.445295  139657 command_runner.go:130] > # The certificate for the secure metrics server.
	I1213 13:16:24.445306  139657 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1213 13:16:24.445319  139657 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1213 13:16:24.445328  139657 command_runner.go:130] > # certificate on any modification event.
	I1213 13:16:24.445335  139657 command_runner.go:130] > # metrics_cert = ""
	I1213 13:16:24.445344  139657 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1213 13:16:24.445355  139657 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1213 13:16:24.445360  139657 command_runner.go:130] > # metrics_key = ""
	I1213 13:16:24.445370  139657 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1213 13:16:24.445379  139657 command_runner.go:130] > [crio.tracing]
	I1213 13:16:24.445387  139657 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1213 13:16:24.445394  139657 command_runner.go:130] > # enable_tracing = false
	I1213 13:16:24.445403  139657 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1213 13:16:24.445413  139657 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1213 13:16:24.445424  139657 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1213 13:16:24.445435  139657 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1213 13:16:24.445444  139657 command_runner.go:130] > # CRI-O NRI configuration.
	I1213 13:16:24.445450  139657 command_runner.go:130] > [crio.nri]
	I1213 13:16:24.445457  139657 command_runner.go:130] > # Globally enable or disable NRI.
	I1213 13:16:24.445465  139657 command_runner.go:130] > # enable_nri = false
	I1213 13:16:24.445471  139657 command_runner.go:130] > # NRI socket to listen on.
	I1213 13:16:24.445479  139657 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1213 13:16:24.445490  139657 command_runner.go:130] > # NRI plugin directory to use.
	I1213 13:16:24.445498  139657 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1213 13:16:24.445509  139657 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1213 13:16:24.445518  139657 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1213 13:16:24.445528  139657 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1213 13:16:24.445539  139657 command_runner.go:130] > # nri_disable_connections = false
	I1213 13:16:24.445548  139657 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1213 13:16:24.445556  139657 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1213 13:16:24.445564  139657 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1213 13:16:24.445572  139657 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1213 13:16:24.445606  139657 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1213 13:16:24.445616  139657 command_runner.go:130] > [crio.stats]
	I1213 13:16:24.445625  139657 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1213 13:16:24.445640  139657 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1213 13:16:24.445648  139657 command_runner.go:130] > # stats_collection_period = 0
	I1213 13:16:24.445769  139657 cni.go:84] Creating CNI manager for ""
	I1213 13:16:24.445787  139657 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 13:16:24.445812  139657 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 13:16:24.445847  139657 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.124 APIServerPort:8441 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-101171 NodeName:functional-101171 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.124"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.124 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 13:16:24.446054  139657 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.124
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-101171"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.124"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.124"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 13:16:24.446191  139657 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 13:16:24.458394  139657 command_runner.go:130] > kubeadm
	I1213 13:16:24.458424  139657 command_runner.go:130] > kubectl
	I1213 13:16:24.458446  139657 command_runner.go:130] > kubelet
	I1213 13:16:24.458789  139657 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 13:16:24.458853  139657 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 13:16:24.471347  139657 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1213 13:16:24.493805  139657 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 13:16:24.515984  139657 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1213 13:16:24.538444  139657 ssh_runner.go:195] Run: grep 192.168.39.124	control-plane.minikube.internal$ /etc/hosts
	I1213 13:16:24.543369  139657 command_runner.go:130] > 192.168.39.124	control-plane.minikube.internal
	I1213 13:16:24.543465  139657 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:16:24.727714  139657 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 13:16:24.748340  139657 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-101171 for IP: 192.168.39.124
	I1213 13:16:24.748371  139657 certs.go:195] generating shared ca certs ...
	I1213 13:16:24.748391  139657 certs.go:227] acquiring lock for ca certs: {Name:mk4d1e73c1a19abecca2e995e14d97b9ab149024 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:16:24.748616  139657 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-131207/.minikube/ca.key
	I1213 13:16:24.748684  139657 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-131207/.minikube/proxy-client-ca.key
	I1213 13:16:24.748697  139657 certs.go:257] generating profile certs ...
	I1213 13:16:24.748799  139657 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-101171/client.key
	I1213 13:16:24.748886  139657 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-101171/apiserver.key.194f038f
	I1213 13:16:24.748927  139657 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-101171/proxy-client.key
	I1213 13:16:24.748940  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 13:16:24.748961  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 13:16:24.748976  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 13:16:24.748999  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 13:16:24.749016  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-101171/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 13:16:24.749031  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-101171/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 13:16:24.749046  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-101171/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 13:16:24.749066  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-101171/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 13:16:24.749158  139657 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/135234.pem (1338 bytes)
	W1213 13:16:24.749196  139657 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-131207/.minikube/certs/135234_empty.pem, impossibly tiny 0 bytes
	I1213 13:16:24.749208  139657 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 13:16:24.749236  139657 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem (1078 bytes)
	I1213 13:16:24.749267  139657 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/cert.pem (1123 bytes)
	I1213 13:16:24.749300  139657 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/key.pem (1675 bytes)
	I1213 13:16:24.749360  139657 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/ssl/certs/1352342.pem (1708 bytes)
	I1213 13:16:24.749402  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:16:24.749419  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/135234.pem -> /usr/share/ca-certificates/135234.pem
	I1213 13:16:24.749434  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/ssl/certs/1352342.pem -> /usr/share/ca-certificates/1352342.pem
	I1213 13:16:24.750215  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 13:16:24.784325  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 13:16:24.817785  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 13:16:24.853144  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 13:16:24.890536  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-101171/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 13:16:24.926567  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-101171/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 13:16:24.962010  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-101171/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 13:16:24.998369  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-101171/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 13:16:25.032230  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 13:16:25.068964  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/certs/135234.pem --> /usr/share/ca-certificates/135234.pem (1338 bytes)
	I1213 13:16:25.102766  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/ssl/certs/1352342.pem --> /usr/share/ca-certificates/1352342.pem (1708 bytes)
	I1213 13:16:25.136252  139657 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 13:16:25.160868  139657 ssh_runner.go:195] Run: openssl version
	I1213 13:16:25.169220  139657 command_runner.go:130] > OpenSSL 3.4.1 11 Feb 2025 (Library: OpenSSL 3.4.1 11 Feb 2025)
	I1213 13:16:25.169344  139657 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1352342.pem
	I1213 13:16:25.182662  139657 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1352342.pem /etc/ssl/certs/1352342.pem
	I1213 13:16:25.196346  139657 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1352342.pem
	I1213 13:16:25.202552  139657 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 13 13:13 /usr/share/ca-certificates/1352342.pem
	I1213 13:16:25.202645  139657 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 13:13 /usr/share/ca-certificates/1352342.pem
	I1213 13:16:25.202700  139657 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1352342.pem
	I1213 13:16:25.211067  139657 command_runner.go:130] > 3ec20f2e
	I1213 13:16:25.211253  139657 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 13:16:25.224328  139657 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:16:25.238368  139657 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 13:16:25.252003  139657 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:16:25.258273  139657 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 13 13:06 /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:16:25.258311  139657 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 13:06 /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:16:25.258360  139657 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:16:25.266989  139657 command_runner.go:130] > b5213941
	I1213 13:16:25.267145  139657 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 13:16:25.280410  139657 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/135234.pem
	I1213 13:16:25.293801  139657 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/135234.pem /etc/ssl/certs/135234.pem
	I1213 13:16:25.308024  139657 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/135234.pem
	I1213 13:16:25.313993  139657 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 13 13:13 /usr/share/ca-certificates/135234.pem
	I1213 13:16:25.314032  139657 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 13:13 /usr/share/ca-certificates/135234.pem
	I1213 13:16:25.314112  139657 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/135234.pem
	I1213 13:16:25.322512  139657 command_runner.go:130] > 51391683
	I1213 13:16:25.322716  139657 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 13:16:25.335714  139657 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 13:16:25.341584  139657 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 13:16:25.341629  139657 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1213 13:16:25.341635  139657 command_runner.go:130] > Device: 253,1	Inode: 7338073     Links: 1
	I1213 13:16:25.341641  139657 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 13:16:25.341647  139657 command_runner.go:130] > Access: 2025-12-13 13:13:54.213466193 +0000
	I1213 13:16:25.341652  139657 command_runner.go:130] > Modify: 2025-12-13 13:13:54.213466193 +0000
	I1213 13:16:25.341657  139657 command_runner.go:130] > Change: 2025-12-13 13:13:54.213466193 +0000
	I1213 13:16:25.341662  139657 command_runner.go:130] >  Birth: 2025-12-13 13:13:54.213466193 +0000
	I1213 13:16:25.341740  139657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 13:16:25.350002  139657 command_runner.go:130] > Certificate will not expire
	I1213 13:16:25.350186  139657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 13:16:25.358329  139657 command_runner.go:130] > Certificate will not expire
	I1213 13:16:25.358448  139657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 13:16:25.366344  139657 command_runner.go:130] > Certificate will not expire
	I1213 13:16:25.366481  139657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 13:16:25.374941  139657 command_runner.go:130] > Certificate will not expire
	I1213 13:16:25.375017  139657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 13:16:25.383466  139657 command_runner.go:130] > Certificate will not expire
	I1213 13:16:25.383560  139657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 13:16:25.391728  139657 command_runner.go:130] > Certificate will not expire
	I1213 13:16:25.391825  139657 kubeadm.go:401] StartCluster: {Name:functional-101171 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34
.2 ClusterName:functional-101171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.124 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:16:25.391949  139657 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 13:16:25.392028  139657 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 13:16:25.432281  139657 command_runner.go:130] > f92a3a092485a0ac1dc51a2bc6f50ba873a8493104faa8027f92b47afffd326c
	I1213 13:16:25.432316  139657 command_runner.go:130] > 0f7e3e7bcf1b4fc58d523ea0a6b71f4b7f6159f908472192dec50c5c4773a6c8
	I1213 13:16:25.432327  139657 command_runner.go:130] > 82d65eddb23627c8c7b03d97ee25384a4641be44a8ce176195431ba631e420a4
	I1213 13:16:25.432337  139657 command_runner.go:130] > c035d6ae568bcf65e2e0e0ac9c8e33c9683cfa5e9962808be5bc1d7e90560b68
	I1213 13:16:25.432345  139657 command_runner.go:130] > f2e4d14cfaaeb50496758e8c7af82df0842b56679f7302760beb406f1d2377b0
	I1213 13:16:25.432364  139657 command_runner.go:130] > 5c5106dd6f44ef172d73e559df759af56aae17be00dbd7bda168113b5c87103e
	I1213 13:16:25.432372  139657 command_runner.go:130] > 9098da3bf6a16aa5aca362d77b4eefdf3d8740ee47058bac1f57462956a0ec41
	I1213 13:16:25.432382  139657 command_runner.go:130] > 032a755151e3edddee963cde3642ebab28ccd3cad4f977f5abe9be2793036fd5
	I1213 13:16:25.432392  139657 command_runner.go:130] > f8b0288ee3d2f686e17cab2f0126717e4773c0a011bf820a99b08c7146415889
	I1213 13:16:25.432405  139657 command_runner.go:130] > cb7606d3b6d8f2b73f95595faf6894b2622d71cebaf6f7aa31ae8cac07f16b57
	I1213 13:16:25.432417  139657 command_runner.go:130] > f02d47f5908b9925ba08e11c9c86ffc993d978b0210bc885a88444e31b6a2a63
	I1213 13:16:25.432448  139657 cri.go:89] found id: "f92a3a092485a0ac1dc51a2bc6f50ba873a8493104faa8027f92b47afffd326c"
	I1213 13:16:25.432463  139657 cri.go:89] found id: "0f7e3e7bcf1b4fc58d523ea0a6b71f4b7f6159f908472192dec50c5c4773a6c8"
	I1213 13:16:25.432471  139657 cri.go:89] found id: "82d65eddb23627c8c7b03d97ee25384a4641be44a8ce176195431ba631e420a4"
	I1213 13:16:25.432481  139657 cri.go:89] found id: "c035d6ae568bcf65e2e0e0ac9c8e33c9683cfa5e9962808be5bc1d7e90560b68"
	I1213 13:16:25.432487  139657 cri.go:89] found id: "f2e4d14cfaaeb50496758e8c7af82df0842b56679f7302760beb406f1d2377b0"
	I1213 13:16:25.432495  139657 cri.go:89] found id: "5c5106dd6f44ef172d73e559df759af56aae17be00dbd7bda168113b5c87103e"
	I1213 13:16:25.432501  139657 cri.go:89] found id: "9098da3bf6a16aa5aca362d77b4eefdf3d8740ee47058bac1f57462956a0ec41"
	I1213 13:16:25.432510  139657 cri.go:89] found id: "032a755151e3edddee963cde3642ebab28ccd3cad4f977f5abe9be2793036fd5"
	I1213 13:16:25.432516  139657 cri.go:89] found id: "f8b0288ee3d2f686e17cab2f0126717e4773c0a011bf820a99b08c7146415889"
	I1213 13:16:25.432528  139657 cri.go:89] found id: "cb7606d3b6d8f2b73f95595faf6894b2622d71cebaf6f7aa31ae8cac07f16b57"
	I1213 13:16:25.432537  139657 cri.go:89] found id: "f02d47f5908b9925ba08e11c9c86ffc993d978b0210bc885a88444e31b6a2a63"
	I1213 13:16:25.432544  139657 cri.go:89] found id: ""
	I1213 13:16:25.432611  139657 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
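The tail of the log above shows how minikube registers extra CA certificates inside the guest: each PEM is hashed with "openssl x509 -hash -noout -in <cert>" (certs.go:528) and then symlinked as "<hash>.0" under /etc/ssl/certs via "ln -fs". A minimal sketch of that hash-and-symlink technique, assuming an illustrative cert.pem and target directory rather than the real guest paths:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCACert sketches the hash-and-symlink step from the log above:
	// compute the OpenSSL subject hash of a PEM certificate and link it as
	// <hash>.0 inside certsDir so TLS tooling can look it up by hash.
	func installCACert(pemPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941", as printed in the log
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // drop any stale link before re-pointing it
		return os.Symlink(pemPath, link)
	}

	func main() {
		// Illustrative paths only; the real run copies the certs into the guest over SSH first.
		if err := installCACert("cert.pem", "/tmp/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}

The hash values in the log (3ec20f2e, b5213941, 51391683) are exactly what this openssl invocation prints for the three certificates being installed.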
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-101171 -n functional-101171
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-101171 -n functional-101171: exit status 2 (187.581693ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-101171" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (636.72s)
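Earlier in the log above, minikube also probes each control-plane certificate with "openssl x509 -noout -checkend 86400", which exits non-zero if the certificate expires within the next 24 hours; every probe answered "Certificate will not expire". A small sketch of wrapping that probe, assuming an illustrative local path rather than the /var/lib/minikube/certs files the test checks:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// certExpiresWithin reports whether the certificate at path expires within
	// the next `seconds` seconds, using the same probe as the log above:
	// openssl x509 -noout -checkend <seconds> exits non-zero when it will expire.
	func certExpiresWithin(path string, seconds int) (bool, error) {
		cmd := exec.Command("openssl", "x509", "-noout", "-in", path,
			"-checkend", fmt.Sprint(seconds))
		if err := cmd.Run(); err != nil {
			if _, ok := err.(*exec.ExitError); ok {
				return true, nil // non-zero exit: certificate expires inside the window
			}
			return false, err // openssl itself failed (missing file, bad PEM, ...)
		}
		return false, nil // corresponds to "Certificate will not expire" in the log
	}

	func main() {
		// Illustrative path only.
		expiring, err := certExpiresWithin("apiserver.crt", 86400)
		fmt.Println(expiring, err)
	}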

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (639.01s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-101171 kubectl -- --context functional-101171 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-101171 kubectl -- --context functional-101171 get pods: exit status 1 (117.613485ms)

                                                
                                                
** stderr ** 
	E1213 13:50:02.065800  147558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.124:8441/api?timeout=32s\": dial tcp 192.168.39.124:8441: connect: connection refused"
	E1213 13:50:02.066324  147558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.124:8441/api?timeout=32s\": dial tcp 192.168.39.124:8441: connect: connection refused"
	E1213 13:50:02.067860  147558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.124:8441/api?timeout=32s\": dial tcp 192.168.39.124:8441: connect: connection refused"
	E1213 13:50:02.068261  147558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.124:8441/api?timeout=32s\": dial tcp 192.168.39.124:8441: connect: connection refused"
	E1213 13:50:02.069750  147558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.124:8441/api?timeout=32s\": dial tcp 192.168.39.124:8441: connect: connection refused"
	The connection to the server 192.168.39.124:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-linux-amd64 -p functional-101171 kubectl -- --context functional-101171 get pods": exit status 1
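Every error above is the same symptom: nothing is accepting TCP connections on 192.168.39.124:8441, so kubectl's API discovery fails before any request is sent. When triaging this kind of failure, a bare TCP probe separates "apiserver process is down" from higher-level problems; a minimal sketch, with the endpoint hard-coded only because it appears in the errors above:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Endpoint taken from the failing kubectl calls above; adjust as needed.
		const apiServer = "192.168.39.124:8441"

		conn, err := net.DialTimeout("tcp", apiServer, 3*time.Second)
		if err != nil {
			// "connection refused" here matches the kubectl errors: no listener on the port.
			fmt.Printf("apiserver not reachable: %v\n", err)
			return
		}
		defer conn.Close()
		fmt.Println("TCP connect succeeded; the failure is above the transport layer")
	}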
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-101171 -n functional-101171
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-101171 -n functional-101171: exit status 2 (186.82854ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctional/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-101171 logs -n 25
E1213 13:53:12.154743  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:58:12.159828  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-101171 logs -n 25: (10m38.446229171s)
helpers_test.go:261: TestFunctional/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                    ARGS                                                     │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ nospam-339903 --log_dir /tmp/nospam-339903 pause                                                            │ nospam-339903     │ jenkins │ v1.37.0 │ 13 Dec 25 13:13 UTC │ 13 Dec 25 13:13 UTC │
	│ unpause │ nospam-339903 --log_dir /tmp/nospam-339903 unpause                                                          │ nospam-339903     │ jenkins │ v1.37.0 │ 13 Dec 25 13:13 UTC │ 13 Dec 25 13:13 UTC │
	│ unpause │ nospam-339903 --log_dir /tmp/nospam-339903 unpause                                                          │ nospam-339903     │ jenkins │ v1.37.0 │ 13 Dec 25 13:13 UTC │ 13 Dec 25 13:13 UTC │
	│ unpause │ nospam-339903 --log_dir /tmp/nospam-339903 unpause                                                          │ nospam-339903     │ jenkins │ v1.37.0 │ 13 Dec 25 13:13 UTC │ 13 Dec 25 13:13 UTC │
	│ stop    │ nospam-339903 --log_dir /tmp/nospam-339903 stop                                                             │ nospam-339903     │ jenkins │ v1.37.0 │ 13 Dec 25 13:13 UTC │ 13 Dec 25 13:13 UTC │
	│ stop    │ nospam-339903 --log_dir /tmp/nospam-339903 stop                                                             │ nospam-339903     │ jenkins │ v1.37.0 │ 13 Dec 25 13:13 UTC │ 13 Dec 25 13:13 UTC │
	│ stop    │ nospam-339903 --log_dir /tmp/nospam-339903 stop                                                             │ nospam-339903     │ jenkins │ v1.37.0 │ 13 Dec 25 13:13 UTC │ 13 Dec 25 13:13 UTC │
	│ delete  │ -p nospam-339903                                                                                            │ nospam-339903     │ jenkins │ v1.37.0 │ 13 Dec 25 13:13 UTC │ 13 Dec 25 13:13 UTC │
	│ start   │ -p functional-101171 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio │ functional-101171 │ jenkins │ v1.37.0 │ 13 Dec 25 13:13 UTC │ 13 Dec 25 13:14 UTC │
	│ start   │ -p functional-101171 --alsologtostderr -v=8                                                                 │ functional-101171 │ jenkins │ v1.37.0 │ 13 Dec 25 13:14 UTC │                     │
	│ cache   │ functional-101171 cache add registry.k8s.io/pause:3.1                                                       │ functional-101171 │ jenkins │ v1.37.0 │ 13 Dec 25 13:49 UTC │ 13 Dec 25 13:49 UTC │
	│ cache   │ functional-101171 cache add registry.k8s.io/pause:3.3                                                       │ functional-101171 │ jenkins │ v1.37.0 │ 13 Dec 25 13:49 UTC │ 13 Dec 25 13:49 UTC │
	│ cache   │ functional-101171 cache add registry.k8s.io/pause:latest                                                    │ functional-101171 │ jenkins │ v1.37.0 │ 13 Dec 25 13:49 UTC │ 13 Dec 25 13:49 UTC │
	│ cache   │ functional-101171 cache add minikube-local-cache-test:functional-101171                                     │ functional-101171 │ jenkins │ v1.37.0 │ 13 Dec 25 13:49 UTC │ 13 Dec 25 13:50 UTC │
	│ cache   │ functional-101171 cache delete minikube-local-cache-test:functional-101171                                  │ functional-101171 │ jenkins │ v1.37.0 │ 13 Dec 25 13:50 UTC │ 13 Dec 25 13:50 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                            │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 13:50 UTC │ 13 Dec 25 13:50 UTC │
	│ cache   │ list                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 13:50 UTC │ 13 Dec 25 13:50 UTC │
	│ ssh     │ functional-101171 ssh sudo crictl images                                                                    │ functional-101171 │ jenkins │ v1.37.0 │ 13 Dec 25 13:50 UTC │ 13 Dec 25 13:50 UTC │
	│ ssh     │ functional-101171 ssh sudo crictl rmi registry.k8s.io/pause:latest                                          │ functional-101171 │ jenkins │ v1.37.0 │ 13 Dec 25 13:50 UTC │ 13 Dec 25 13:50 UTC │
	│ ssh     │ functional-101171 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                     │ functional-101171 │ jenkins │ v1.37.0 │ 13 Dec 25 13:50 UTC │                     │
	│ cache   │ functional-101171 cache reload                                                                              │ functional-101171 │ jenkins │ v1.37.0 │ 13 Dec 25 13:50 UTC │ 13 Dec 25 13:50 UTC │
	│ ssh     │ functional-101171 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                     │ functional-101171 │ jenkins │ v1.37.0 │ 13 Dec 25 13:50 UTC │ 13 Dec 25 13:50 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                            │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 13:50 UTC │ 13 Dec 25 13:50 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                         │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 13:50 UTC │ 13 Dec 25 13:50 UTC │
	│ kubectl │ functional-101171 kubectl -- --context functional-101171 get pods                                           │ functional-101171 │ jenkins │ v1.37.0 │ 13 Dec 25 13:50 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 13:14:44
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 13:14:44.880702  139657 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:14:44.880839  139657 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:14:44.880850  139657 out.go:374] Setting ErrFile to fd 2...
	I1213 13:14:44.880858  139657 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:14:44.881087  139657 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-131207/.minikube/bin
	I1213 13:14:44.881551  139657 out.go:368] Setting JSON to false
	I1213 13:14:44.882447  139657 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3425,"bootTime":1765628260,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:14:44.882501  139657 start.go:143] virtualization: kvm guest
	I1213 13:14:44.884268  139657 out.go:179] * [functional-101171] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 13:14:44.885270  139657 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 13:14:44.885307  139657 notify.go:221] Checking for updates...
	I1213 13:14:44.887088  139657 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:14:44.888140  139657 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-131207/kubeconfig
	I1213 13:14:44.889099  139657 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-131207/.minikube
	I1213 13:14:44.890102  139657 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 13:14:44.891038  139657 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 13:14:44.892542  139657 config.go:182] Loaded profile config "functional-101171": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:14:44.892673  139657 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 13:14:44.927435  139657 out.go:179] * Using the kvm2 driver based on existing profile
	I1213 13:14:44.928372  139657 start.go:309] selected driver: kvm2
	I1213 13:14:44.928386  139657 start.go:927] validating driver "kvm2" against &{Name:functional-101171 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:functional-101171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.124 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 M
ountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:14:44.928499  139657 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 13:14:44.929402  139657 cni.go:84] Creating CNI manager for ""
	I1213 13:14:44.929464  139657 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 13:14:44.929513  139657 start.go:353] cluster config:
	{Name:functional-101171 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-101171 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.124 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:14:44.929611  139657 iso.go:125] acquiring lock: {Name:mk3b22d147b17c1b05cdcd03e16c3f962e91cdaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 13:14:44.930834  139657 out.go:179] * Starting "functional-101171" primary control-plane node in "functional-101171" cluster
	I1213 13:14:44.931691  139657 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 13:14:44.931725  139657 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-131207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1213 13:14:44.931737  139657 cache.go:65] Caching tarball of preloaded images
	I1213 13:14:44.931865  139657 preload.go:238] Found /home/jenkins/minikube-integration/22122-131207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 13:14:44.931879  139657 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 13:14:44.931980  139657 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-101171/config.json ...
	I1213 13:14:44.932230  139657 start.go:360] acquireMachinesLock for functional-101171: {Name:mkd3517afd6ad3d581ae9f96a02a4688cf83ce0e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 13:14:44.932293  139657 start.go:364] duration metric: took 38.36µs to acquireMachinesLock for "functional-101171"
	I1213 13:14:44.932313  139657 start.go:96] Skipping create...Using existing machine configuration
	I1213 13:14:44.932324  139657 fix.go:54] fixHost starting: 
	I1213 13:14:44.933932  139657 fix.go:112] recreateIfNeeded on functional-101171: state=Running err=<nil>
	W1213 13:14:44.933963  139657 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 13:14:44.935205  139657 out.go:252] * Updating the running kvm2 "functional-101171" VM ...
	I1213 13:14:44.935228  139657 machine.go:94] provisionDockerMachine start ...
	I1213 13:14:44.937452  139657 main.go:143] libmachine: domain functional-101171 has defined MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:44.937806  139657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7c:80:d0", ip: ""} in network mk-functional-101171: {Iface:virbr1 ExpiryTime:2025-12-13 14:13:45 +0000 UTC Type:0 Mac:52:54:00:7c:80:d0 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-101171 Clientid:01:52:54:00:7c:80:d0}
	I1213 13:14:44.937835  139657 main.go:143] libmachine: domain functional-101171 has defined IP address 192.168.39.124 and MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:44.938001  139657 main.go:143] libmachine: Using SSH client type: native
	I1213 13:14:44.938338  139657 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.124 22 <nil> <nil>}
	I1213 13:14:44.938355  139657 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 13:14:45.046797  139657 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-101171
	
	I1213 13:14:45.046826  139657 buildroot.go:166] provisioning hostname "functional-101171"
	I1213 13:14:45.049877  139657 main.go:143] libmachine: domain functional-101171 has defined MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:45.050321  139657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7c:80:d0", ip: ""} in network mk-functional-101171: {Iface:virbr1 ExpiryTime:2025-12-13 14:13:45 +0000 UTC Type:0 Mac:52:54:00:7c:80:d0 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-101171 Clientid:01:52:54:00:7c:80:d0}
	I1213 13:14:45.050355  139657 main.go:143] libmachine: domain functional-101171 has defined IP address 192.168.39.124 and MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:45.050541  139657 main.go:143] libmachine: Using SSH client type: native
	I1213 13:14:45.050782  139657 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.124 22 <nil> <nil>}
	I1213 13:14:45.050798  139657 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-101171 && echo "functional-101171" | sudo tee /etc/hostname
	I1213 13:14:45.172748  139657 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-101171
	
	I1213 13:14:45.175509  139657 main.go:143] libmachine: domain functional-101171 has defined MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:45.175971  139657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7c:80:d0", ip: ""} in network mk-functional-101171: {Iface:virbr1 ExpiryTime:2025-12-13 14:13:45 +0000 UTC Type:0 Mac:52:54:00:7c:80:d0 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-101171 Clientid:01:52:54:00:7c:80:d0}
	I1213 13:14:45.176008  139657 main.go:143] libmachine: domain functional-101171 has defined IP address 192.168.39.124 and MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:45.176182  139657 main.go:143] libmachine: Using SSH client type: native
	I1213 13:14:45.176385  139657 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.124 22 <nil> <nil>}
	I1213 13:14:45.176400  139657 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-101171' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-101171/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-101171' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 13:14:45.281039  139657 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 13:14:45.281099  139657 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22122-131207/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-131207/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-131207/.minikube}
	I1213 13:14:45.281128  139657 buildroot.go:174] setting up certificates
	I1213 13:14:45.281147  139657 provision.go:84] configureAuth start
	I1213 13:14:45.283949  139657 main.go:143] libmachine: domain functional-101171 has defined MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:45.284380  139657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7c:80:d0", ip: ""} in network mk-functional-101171: {Iface:virbr1 ExpiryTime:2025-12-13 14:13:45 +0000 UTC Type:0 Mac:52:54:00:7c:80:d0 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-101171 Clientid:01:52:54:00:7c:80:d0}
	I1213 13:14:45.284418  139657 main.go:143] libmachine: domain functional-101171 has defined IP address 192.168.39.124 and MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:45.286705  139657 main.go:143] libmachine: domain functional-101171 has defined MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:45.287058  139657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7c:80:d0", ip: ""} in network mk-functional-101171: {Iface:virbr1 ExpiryTime:2025-12-13 14:13:45 +0000 UTC Type:0 Mac:52:54:00:7c:80:d0 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-101171 Clientid:01:52:54:00:7c:80:d0}
	I1213 13:14:45.287116  139657 main.go:143] libmachine: domain functional-101171 has defined IP address 192.168.39.124 and MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:45.287256  139657 provision.go:143] copyHostCerts
	I1213 13:14:45.287299  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22122-131207/.minikube/ca.pem
	I1213 13:14:45.287346  139657 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-131207/.minikube/ca.pem, removing ...
	I1213 13:14:45.287365  139657 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-131207/.minikube/ca.pem
	I1213 13:14:45.287454  139657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-131207/.minikube/ca.pem (1078 bytes)
	I1213 13:14:45.287580  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22122-131207/.minikube/cert.pem
	I1213 13:14:45.287614  139657 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-131207/.minikube/cert.pem, removing ...
	I1213 13:14:45.287625  139657 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-131207/.minikube/cert.pem
	I1213 13:14:45.287672  139657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-131207/.minikube/cert.pem (1123 bytes)
	I1213 13:14:45.287766  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22122-131207/.minikube/key.pem
	I1213 13:14:45.287791  139657 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-131207/.minikube/key.pem, removing ...
	I1213 13:14:45.287797  139657 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-131207/.minikube/key.pem
	I1213 13:14:45.287842  139657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-131207/.minikube/key.pem (1675 bytes)
	I1213 13:14:45.287926  139657 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-131207/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca-key.pem org=jenkins.functional-101171 san=[127.0.0.1 192.168.39.124 functional-101171 localhost minikube]
	I1213 13:14:45.423318  139657 provision.go:177] copyRemoteCerts
	I1213 13:14:45.423403  139657 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 13:14:45.425911  139657 main.go:143] libmachine: domain functional-101171 has defined MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:45.426340  139657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7c:80:d0", ip: ""} in network mk-functional-101171: {Iface:virbr1 ExpiryTime:2025-12-13 14:13:45 +0000 UTC Type:0 Mac:52:54:00:7c:80:d0 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-101171 Clientid:01:52:54:00:7c:80:d0}
	I1213 13:14:45.426370  139657 main.go:143] libmachine: domain functional-101171 has defined IP address 192.168.39.124 and MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:45.426502  139657 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/functional-101171/id_rsa Username:docker}
	I1213 13:14:45.512848  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 13:14:45.512952  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 13:14:45.542724  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 13:14:45.542812  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 13:14:45.571677  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 13:14:45.571772  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 13:14:45.601284  139657 provision.go:87] duration metric: took 320.120369ms to configureAuth
	I1213 13:14:45.601314  139657 buildroot.go:189] setting minikube options for container-runtime
	I1213 13:14:45.601491  139657 config.go:182] Loaded profile config "functional-101171": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:14:45.604379  139657 main.go:143] libmachine: domain functional-101171 has defined MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:45.604741  139657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7c:80:d0", ip: ""} in network mk-functional-101171: {Iface:virbr1 ExpiryTime:2025-12-13 14:13:45 +0000 UTC Type:0 Mac:52:54:00:7c:80:d0 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-101171 Clientid:01:52:54:00:7c:80:d0}
	I1213 13:14:45.604764  139657 main.go:143] libmachine: domain functional-101171 has defined IP address 192.168.39.124 and MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:45.604932  139657 main.go:143] libmachine: Using SSH client type: native
	I1213 13:14:45.605181  139657 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.124 22 <nil> <nil>}
	I1213 13:14:45.605200  139657 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 13:14:51.168422  139657 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 13:14:51.168457  139657 machine.go:97] duration metric: took 6.233220346s to provisionDockerMachine
	I1213 13:14:51.168486  139657 start.go:293] postStartSetup for "functional-101171" (driver="kvm2")
	I1213 13:14:51.168502  139657 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 13:14:51.168611  139657 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 13:14:51.171649  139657 main.go:143] libmachine: domain functional-101171 has defined MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:51.172012  139657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7c:80:d0", ip: ""} in network mk-functional-101171: {Iface:virbr1 ExpiryTime:2025-12-13 14:13:45 +0000 UTC Type:0 Mac:52:54:00:7c:80:d0 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-101171 Clientid:01:52:54:00:7c:80:d0}
	I1213 13:14:51.172099  139657 main.go:143] libmachine: domain functional-101171 has defined IP address 192.168.39.124 and MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:51.172264  139657 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/functional-101171/id_rsa Username:docker}
	I1213 13:14:51.256552  139657 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 13:14:51.261415  139657 command_runner.go:130] > NAME=Buildroot
	I1213 13:14:51.261442  139657 command_runner.go:130] > VERSION=2025.02-dirty
	I1213 13:14:51.261446  139657 command_runner.go:130] > ID=buildroot
	I1213 13:14:51.261450  139657 command_runner.go:130] > VERSION_ID=2025.02
	I1213 13:14:51.261455  139657 command_runner.go:130] > PRETTY_NAME="Buildroot 2025.02"
	I1213 13:14:51.261540  139657 info.go:137] Remote host: Buildroot 2025.02
	I1213 13:14:51.261567  139657 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-131207/.minikube/addons for local assets ...
	I1213 13:14:51.261651  139657 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-131207/.minikube/files for local assets ...
	I1213 13:14:51.261758  139657 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/ssl/certs/1352342.pem -> 1352342.pem in /etc/ssl/certs
	I1213 13:14:51.261772  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/ssl/certs/1352342.pem -> /etc/ssl/certs/1352342.pem
	I1213 13:14:51.261876  139657 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/test/nested/copy/135234/hosts -> hosts in /etc/test/nested/copy/135234
	I1213 13:14:51.261886  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/test/nested/copy/135234/hosts -> /etc/test/nested/copy/135234/hosts
	I1213 13:14:51.261944  139657 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/135234
	I1213 13:14:51.275404  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/ssl/certs/1352342.pem --> /etc/ssl/certs/1352342.pem (1708 bytes)
	I1213 13:14:51.304392  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/test/nested/copy/135234/hosts --> /etc/test/nested/copy/135234/hosts (40 bytes)
	I1213 13:14:51.390782  139657 start.go:296] duration metric: took 222.277729ms for postStartSetup
	I1213 13:14:51.390831  139657 fix.go:56] duration metric: took 6.458506569s for fixHost
	I1213 13:14:51.394087  139657 main.go:143] libmachine: domain functional-101171 has defined MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:51.394507  139657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7c:80:d0", ip: ""} in network mk-functional-101171: {Iface:virbr1 ExpiryTime:2025-12-13 14:13:45 +0000 UTC Type:0 Mac:52:54:00:7c:80:d0 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-101171 Clientid:01:52:54:00:7c:80:d0}
	I1213 13:14:51.394539  139657 main.go:143] libmachine: domain functional-101171 has defined IP address 192.168.39.124 and MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:51.394733  139657 main.go:143] libmachine: Using SSH client type: native
	I1213 13:14:51.395032  139657 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.124 22 <nil> <nil>}
	I1213 13:14:51.395048  139657 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1213 13:14:51.547616  139657 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765631691.540521728
	
	I1213 13:14:51.547640  139657 fix.go:216] guest clock: 1765631691.540521728
	I1213 13:14:51.547663  139657 fix.go:229] Guest: 2025-12-13 13:14:51.540521728 +0000 UTC Remote: 2025-12-13 13:14:51.390838299 +0000 UTC m=+6.561594252 (delta=149.683429ms)
	I1213 13:14:51.547685  139657 fix.go:200] guest clock delta is within tolerance: 149.683429ms
	I1213 13:14:51.547691  139657 start.go:83] releasing machines lock for "functional-101171", held for 6.615387027s
	I1213 13:14:51.550620  139657 main.go:143] libmachine: domain functional-101171 has defined MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:51.551093  139657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7c:80:d0", ip: ""} in network mk-functional-101171: {Iface:virbr1 ExpiryTime:2025-12-13 14:13:45 +0000 UTC Type:0 Mac:52:54:00:7c:80:d0 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-101171 Clientid:01:52:54:00:7c:80:d0}
	I1213 13:14:51.551134  139657 main.go:143] libmachine: domain functional-101171 has defined IP address 192.168.39.124 and MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:51.551858  139657 ssh_runner.go:195] Run: cat /version.json
	I1213 13:14:51.551895  139657 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 13:14:51.555225  139657 main.go:143] libmachine: domain functional-101171 has defined MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:51.555396  139657 main.go:143] libmachine: domain functional-101171 has defined MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:51.555679  139657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7c:80:d0", ip: ""} in network mk-functional-101171: {Iface:virbr1 ExpiryTime:2025-12-13 14:13:45 +0000 UTC Type:0 Mac:52:54:00:7c:80:d0 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-101171 Clientid:01:52:54:00:7c:80:d0}
	I1213 13:14:51.555709  139657 main.go:143] libmachine: domain functional-101171 has defined IP address 192.168.39.124 and MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:51.555901  139657 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/functional-101171/id_rsa Username:docker}
	I1213 13:14:51.555915  139657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7c:80:d0", ip: ""} in network mk-functional-101171: {Iface:virbr1 ExpiryTime:2025-12-13 14:13:45 +0000 UTC Type:0 Mac:52:54:00:7c:80:d0 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-101171 Clientid:01:52:54:00:7c:80:d0}
	I1213 13:14:51.555948  139657 main.go:143] libmachine: domain functional-101171 has defined IP address 192.168.39.124 and MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:14:51.556188  139657 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/functional-101171/id_rsa Username:docker}
	I1213 13:14:51.711392  139657 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1213 13:14:51.711480  139657 command_runner.go:130] > {"iso_version": "v1.37.0-1765613186-22122", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "89f69959280ebeefd164cfeba1f5b84c6f004bc9"}
	I1213 13:14:51.711625  139657 ssh_runner.go:195] Run: systemctl --version
	I1213 13:14:51.721211  139657 command_runner.go:130] > systemd 256 (256.7)
	I1213 13:14:51.721261  139657 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP -LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT -LIBARCHIVE
	I1213 13:14:51.721342  139657 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 13:14:51.928878  139657 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 13:14:51.943312  139657 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1213 13:14:51.943381  139657 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 13:14:51.943457  139657 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 13:14:51.961133  139657 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 13:14:51.961160  139657 start.go:496] detecting cgroup driver to use...
	I1213 13:14:51.961234  139657 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 13:14:52.008684  139657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 13:14:52.058685  139657 docker.go:218] disabling cri-docker service (if available) ...
	I1213 13:14:52.058767  139657 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 13:14:52.099652  139657 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 13:14:52.129214  139657 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 13:14:52.454020  139657 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 13:14:52.731152  139657 docker.go:234] disabling docker service ...
	I1213 13:14:52.731233  139657 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 13:14:52.789926  139657 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 13:14:52.807635  139657 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 13:14:53.089730  139657 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 13:14:53.328299  139657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 13:14:53.351747  139657 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 13:14:53.384802  139657 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1213 13:14:53.384876  139657 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 13:14:53.385004  139657 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:14:53.402675  139657 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 13:14:53.402773  139657 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:14:53.425941  139657 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:14:53.444350  139657 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:14:53.459025  139657 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 13:14:53.488518  139657 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:14:53.515384  139657 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:14:53.531334  139657 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:14:53.545103  139657 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 13:14:53.555838  139657 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1213 13:14:53.556273  139657 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 13:14:53.567831  139657 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:14:53.751704  139657 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 13:16:24.195369  139657 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.443610327s)
	I1213 13:16:24.195422  139657 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 13:16:24.195496  139657 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 13:16:24.201208  139657 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1213 13:16:24.201250  139657 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1213 13:16:24.201260  139657 command_runner.go:130] > Device: 0,23	Inode: 1994        Links: 1
	I1213 13:16:24.201270  139657 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 13:16:24.201277  139657 command_runner.go:130] > Access: 2025-12-13 13:16:24.024303909 +0000
	I1213 13:16:24.201287  139657 command_runner.go:130] > Modify: 2025-12-13 13:16:24.024303909 +0000
	I1213 13:16:24.201293  139657 command_runner.go:130] > Change: 2025-12-13 13:16:24.024303909 +0000
	I1213 13:16:24.201298  139657 command_runner.go:130] >  Birth: 2025-12-13 13:16:24.024303909 +0000
	I1213 13:16:24.201336  139657 start.go:564] Will wait 60s for crictl version
	I1213 13:16:24.201389  139657 ssh_runner.go:195] Run: which crictl
	I1213 13:16:24.205825  139657 command_runner.go:130] > /usr/bin/crictl
	I1213 13:16:24.205969  139657 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 13:16:24.240544  139657 command_runner.go:130] > Version:  0.1.0
	I1213 13:16:24.240566  139657 command_runner.go:130] > RuntimeName:  cri-o
	I1213 13:16:24.240571  139657 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1213 13:16:24.240576  139657 command_runner.go:130] > RuntimeApiVersion:  v1
	I1213 13:16:24.240600  139657 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1213 13:16:24.240739  139657 ssh_runner.go:195] Run: crio --version
	I1213 13:16:24.274046  139657 command_runner.go:130] > crio version 1.29.1
	I1213 13:16:24.274084  139657 command_runner.go:130] > Version:        1.29.1
	I1213 13:16:24.274090  139657 command_runner.go:130] > GitCommit:      unknown
	I1213 13:16:24.274094  139657 command_runner.go:130] > GitCommitDate:  unknown
	I1213 13:16:24.274098  139657 command_runner.go:130] > GitTreeState:   clean
	I1213 13:16:24.274104  139657 command_runner.go:130] > BuildDate:      2025-12-13T11:21:09Z
	I1213 13:16:24.274108  139657 command_runner.go:130] > GoVersion:      go1.25.5
	I1213 13:16:24.274112  139657 command_runner.go:130] > Compiler:       gc
	I1213 13:16:24.274115  139657 command_runner.go:130] > Platform:       linux/amd64
	I1213 13:16:24.274119  139657 command_runner.go:130] > Linkmode:       dynamic
	I1213 13:16:24.274126  139657 command_runner.go:130] > BuildTags:      
	I1213 13:16:24.274131  139657 command_runner.go:130] >   containers_image_ostree_stub
	I1213 13:16:24.274135  139657 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1213 13:16:24.274138  139657 command_runner.go:130] >   btrfs_noversion
	I1213 13:16:24.274143  139657 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1213 13:16:24.274150  139657 command_runner.go:130] >   libdm_no_deferred_remove
	I1213 13:16:24.274153  139657 command_runner.go:130] >   seccomp
	I1213 13:16:24.274158  139657 command_runner.go:130] > LDFlags:          unknown
	I1213 13:16:24.274162  139657 command_runner.go:130] > SeccompEnabled:   true
	I1213 13:16:24.274166  139657 command_runner.go:130] > AppArmorEnabled:  false
	I1213 13:16:24.274253  139657 ssh_runner.go:195] Run: crio --version
	I1213 13:16:24.307345  139657 command_runner.go:130] > crio version 1.29.1
	I1213 13:16:24.307372  139657 command_runner.go:130] > Version:        1.29.1
	I1213 13:16:24.307385  139657 command_runner.go:130] > GitCommit:      unknown
	I1213 13:16:24.307390  139657 command_runner.go:130] > GitCommitDate:  unknown
	I1213 13:16:24.307394  139657 command_runner.go:130] > GitTreeState:   clean
	I1213 13:16:24.307400  139657 command_runner.go:130] > BuildDate:      2025-12-13T11:21:09Z
	I1213 13:16:24.307406  139657 command_runner.go:130] > GoVersion:      go1.25.5
	I1213 13:16:24.307412  139657 command_runner.go:130] > Compiler:       gc
	I1213 13:16:24.307419  139657 command_runner.go:130] > Platform:       linux/amd64
	I1213 13:16:24.307425  139657 command_runner.go:130] > Linkmode:       dynamic
	I1213 13:16:24.307436  139657 command_runner.go:130] > BuildTags:      
	I1213 13:16:24.307444  139657 command_runner.go:130] >   containers_image_ostree_stub
	I1213 13:16:24.307453  139657 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1213 13:16:24.307458  139657 command_runner.go:130] >   btrfs_noversion
	I1213 13:16:24.307462  139657 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1213 13:16:24.307468  139657 command_runner.go:130] >   libdm_no_deferred_remove
	I1213 13:16:24.307472  139657 command_runner.go:130] >   seccomp
	I1213 13:16:24.307476  139657 command_runner.go:130] > LDFlags:          unknown
	I1213 13:16:24.307481  139657 command_runner.go:130] > SeccompEnabled:   true
	I1213 13:16:24.307484  139657 command_runner.go:130] > AppArmorEnabled:  false
	I1213 13:16:24.309954  139657 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1213 13:16:24.314441  139657 main.go:143] libmachine: domain functional-101171 has defined MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:16:24.314910  139657 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7c:80:d0", ip: ""} in network mk-functional-101171: {Iface:virbr1 ExpiryTime:2025-12-13 14:13:45 +0000 UTC Type:0 Mac:52:54:00:7c:80:d0 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-101171 Clientid:01:52:54:00:7c:80:d0}
	I1213 13:16:24.314934  139657 main.go:143] libmachine: domain functional-101171 has defined IP address 192.168.39.124 and MAC address 52:54:00:7c:80:d0 in network mk-functional-101171
	I1213 13:16:24.315179  139657 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1213 13:16:24.320471  139657 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1213 13:16:24.320604  139657 kubeadm.go:884] updating cluster {Name:functional-101171 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.34.2 ClusterName:functional-101171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.124 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 13:16:24.320792  139657 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 13:16:24.320856  139657 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:16:24.358340  139657 command_runner.go:130] > {
	I1213 13:16:24.358367  139657 command_runner.go:130] >   "images":  [
	I1213 13:16:24.358373  139657 command_runner.go:130] >     {
	I1213 13:16:24.358385  139657 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1213 13:16:24.358391  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.358399  139657 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 13:16:24.358414  139657 command_runner.go:130] >       ],
	I1213 13:16:24.358422  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.358433  139657 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1213 13:16:24.358445  139657 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1213 13:16:24.358469  139657 command_runner.go:130] >       ],
	I1213 13:16:24.358478  139657 command_runner.go:130] >       "size":  "109379124",
	I1213 13:16:24.358484  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.358497  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.358504  139657 command_runner.go:130] >     },
	I1213 13:16:24.358509  139657 command_runner.go:130] >     {
	I1213 13:16:24.358519  139657 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1213 13:16:24.358529  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.358538  139657 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 13:16:24.358548  139657 command_runner.go:130] >       ],
	I1213 13:16:24.358553  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.358565  139657 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1213 13:16:24.358580  139657 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1213 13:16:24.358591  139657 command_runner.go:130] >       ],
	I1213 13:16:24.358598  139657 command_runner.go:130] >       "size":  "31470524",
	I1213 13:16:24.358604  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.358617  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.358623  139657 command_runner.go:130] >     },
	I1213 13:16:24.358626  139657 command_runner.go:130] >     {
	I1213 13:16:24.358634  139657 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1213 13:16:24.358644  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.358653  139657 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1213 13:16:24.358661  139657 command_runner.go:130] >       ],
	I1213 13:16:24.358668  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.358685  139657 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1213 13:16:24.358707  139657 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1213 13:16:24.358715  139657 command_runner.go:130] >       ],
	I1213 13:16:24.358721  139657 command_runner.go:130] >       "size":  "76103547",
	I1213 13:16:24.358731  139657 command_runner.go:130] >       "username":  "nonroot",
	I1213 13:16:24.358737  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.358744  139657 command_runner.go:130] >     },
	I1213 13:16:24.358748  139657 command_runner.go:130] >     {
	I1213 13:16:24.358757  139657 command_runner.go:130] >       "id":  "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1",
	I1213 13:16:24.358770  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.358779  139657 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 13:16:24.358784  139657 command_runner.go:130] >       ],
	I1213 13:16:24.358793  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.358810  139657 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1213 13:16:24.358823  139657 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"
	I1213 13:16:24.358828  139657 command_runner.go:130] >       ],
	I1213 13:16:24.358834  139657 command_runner.go:130] >       "size":  "63585106",
	I1213 13:16:24.358840  139657 command_runner.go:130] >       "uid":  {
	I1213 13:16:24.358849  139657 command_runner.go:130] >         "value":  "0"
	I1213 13:16:24.358855  139657 command_runner.go:130] >       },
	I1213 13:16:24.358875  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.358883  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.358889  139657 command_runner.go:130] >     },
	I1213 13:16:24.358896  139657 command_runner.go:130] >     {
	I1213 13:16:24.358905  139657 command_runner.go:130] >       "id":  "a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85",
	I1213 13:16:24.358911  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.358918  139657 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.2"
	I1213 13:16:24.358926  139657 command_runner.go:130] >       ],
	I1213 13:16:24.358933  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.358946  139657 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077",
	I1213 13:16:24.358960  139657 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd"
	I1213 13:16:24.358967  139657 command_runner.go:130] >       ],
	I1213 13:16:24.358974  139657 command_runner.go:130] >       "size":  "89046001",
	I1213 13:16:24.358982  139657 command_runner.go:130] >       "uid":  {
	I1213 13:16:24.358987  139657 command_runner.go:130] >         "value":  "0"
	I1213 13:16:24.358995  139657 command_runner.go:130] >       },
	I1213 13:16:24.359001  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.359010  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.359016  139657 command_runner.go:130] >     },
	I1213 13:16:24.359025  139657 command_runner.go:130] >     {
	I1213 13:16:24.359035  139657 command_runner.go:130] >       "id":  "01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8",
	I1213 13:16:24.359045  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.359060  139657 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.2"
	I1213 13:16:24.359103  139657 command_runner.go:130] >       ],
	I1213 13:16:24.359117  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.359130  139657 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb",
	I1213 13:16:24.359145  139657 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e"
	I1213 13:16:24.359151  139657 command_runner.go:130] >       ],
	I1213 13:16:24.359158  139657 command_runner.go:130] >       "size":  "76004183",
	I1213 13:16:24.359164  139657 command_runner.go:130] >       "uid":  {
	I1213 13:16:24.359169  139657 command_runner.go:130] >         "value":  "0"
	I1213 13:16:24.359177  139657 command_runner.go:130] >       },
	I1213 13:16:24.359182  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.359190  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.359196  139657 command_runner.go:130] >     },
	I1213 13:16:24.359201  139657 command_runner.go:130] >     {
	I1213 13:16:24.359218  139657 command_runner.go:130] >       "id":  "8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45",
	I1213 13:16:24.359228  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.359235  139657 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.2"
	I1213 13:16:24.359243  139657 command_runner.go:130] >       ],
	I1213 13:16:24.359251  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.359266  139657 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74",
	I1213 13:16:24.359281  139657 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"
	I1213 13:16:24.359291  139657 command_runner.go:130] >       ],
	I1213 13:16:24.359298  139657 command_runner.go:130] >       "size":  "73145240",
	I1213 13:16:24.359307  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.359314  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.359323  139657 command_runner.go:130] >     },
	I1213 13:16:24.359328  139657 command_runner.go:130] >     {
	I1213 13:16:24.359338  139657 command_runner.go:130] >       "id":  "88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952",
	I1213 13:16:24.359344  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.359350  139657 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.2"
	I1213 13:16:24.359355  139657 command_runner.go:130] >       ],
	I1213 13:16:24.359359  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.359366  139657 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6",
	I1213 13:16:24.359407  139657 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda"
	I1213 13:16:24.359414  139657 command_runner.go:130] >       ],
	I1213 13:16:24.359418  139657 command_runner.go:130] >       "size":  "53848919",
	I1213 13:16:24.359422  139657 command_runner.go:130] >       "uid":  {
	I1213 13:16:24.359425  139657 command_runner.go:130] >         "value":  "0"
	I1213 13:16:24.359428  139657 command_runner.go:130] >       },
	I1213 13:16:24.359432  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.359439  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.359442  139657 command_runner.go:130] >     },
	I1213 13:16:24.359445  139657 command_runner.go:130] >     {
	I1213 13:16:24.359453  139657 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1213 13:16:24.359457  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.359463  139657 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 13:16:24.359466  139657 command_runner.go:130] >       ],
	I1213 13:16:24.359470  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.359478  139657 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1213 13:16:24.359485  139657 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1213 13:16:24.359490  139657 command_runner.go:130] >       ],
	I1213 13:16:24.359494  139657 command_runner.go:130] >       "size":  "742092",
	I1213 13:16:24.359497  139657 command_runner.go:130] >       "uid":  {
	I1213 13:16:24.359501  139657 command_runner.go:130] >         "value":  "65535"
	I1213 13:16:24.359506  139657 command_runner.go:130] >       },
	I1213 13:16:24.359510  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.359514  139657 command_runner.go:130] >       "pinned":  true
	I1213 13:16:24.359519  139657 command_runner.go:130] >     }
	I1213 13:16:24.359522  139657 command_runner.go:130] >   ]
	I1213 13:16:24.359525  139657 command_runner.go:130] > }
	I1213 13:16:24.360333  139657 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 13:16:24.360355  139657 crio.go:433] Images already preloaded, skipping extraction
	I1213 13:16:24.360418  139657 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:16:24.392193  139657 command_runner.go:130] > {
	I1213 13:16:24.392217  139657 command_runner.go:130] >   "images":  [
	I1213 13:16:24.392221  139657 command_runner.go:130] >     {
	I1213 13:16:24.392229  139657 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1213 13:16:24.392236  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.392246  139657 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 13:16:24.392257  139657 command_runner.go:130] >       ],
	I1213 13:16:24.392268  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.392284  139657 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1213 13:16:24.392297  139657 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1213 13:16:24.392305  139657 command_runner.go:130] >       ],
	I1213 13:16:24.392314  139657 command_runner.go:130] >       "size":  "109379124",
	I1213 13:16:24.392328  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.392335  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.392339  139657 command_runner.go:130] >     },
	I1213 13:16:24.392344  139657 command_runner.go:130] >     {
	I1213 13:16:24.392351  139657 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1213 13:16:24.392357  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.392364  139657 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 13:16:24.392372  139657 command_runner.go:130] >       ],
	I1213 13:16:24.392379  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.392393  139657 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1213 13:16:24.392409  139657 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1213 13:16:24.392417  139657 command_runner.go:130] >       ],
	I1213 13:16:24.392423  139657 command_runner.go:130] >       "size":  "31470524",
	I1213 13:16:24.392430  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.392438  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.392443  139657 command_runner.go:130] >     },
	I1213 13:16:24.392447  139657 command_runner.go:130] >     {
	I1213 13:16:24.392456  139657 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1213 13:16:24.392462  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.392467  139657 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1213 13:16:24.392472  139657 command_runner.go:130] >       ],
	I1213 13:16:24.392478  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.392492  139657 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1213 13:16:24.392507  139657 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1213 13:16:24.392518  139657 command_runner.go:130] >       ],
	I1213 13:16:24.392527  139657 command_runner.go:130] >       "size":  "76103547",
	I1213 13:16:24.392537  139657 command_runner.go:130] >       "username":  "nonroot",
	I1213 13:16:24.392545  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.392548  139657 command_runner.go:130] >     },
	I1213 13:16:24.392551  139657 command_runner.go:130] >     {
	I1213 13:16:24.392557  139657 command_runner.go:130] >       "id":  "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1",
	I1213 13:16:24.392564  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.392579  139657 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 13:16:24.392592  139657 command_runner.go:130] >       ],
	I1213 13:16:24.392603  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.392617  139657 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1213 13:16:24.392633  139657 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"
	I1213 13:16:24.392645  139657 command_runner.go:130] >       ],
	I1213 13:16:24.392654  139657 command_runner.go:130] >       "size":  "63585106",
	I1213 13:16:24.392663  139657 command_runner.go:130] >       "uid":  {
	I1213 13:16:24.392673  139657 command_runner.go:130] >         "value":  "0"
	I1213 13:16:24.392679  139657 command_runner.go:130] >       },
	I1213 13:16:24.392690  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.392698  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.392706  139657 command_runner.go:130] >     },
	I1213 13:16:24.392712  139657 command_runner.go:130] >     {
	I1213 13:16:24.392724  139657 command_runner.go:130] >       "id":  "a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85",
	I1213 13:16:24.392734  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.392746  139657 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.2"
	I1213 13:16:24.392754  139657 command_runner.go:130] >       ],
	I1213 13:16:24.392761  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.392775  139657 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077",
	I1213 13:16:24.392788  139657 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd"
	I1213 13:16:24.392794  139657 command_runner.go:130] >       ],
	I1213 13:16:24.392800  139657 command_runner.go:130] >       "size":  "89046001",
	I1213 13:16:24.392808  139657 command_runner.go:130] >       "uid":  {
	I1213 13:16:24.392818  139657 command_runner.go:130] >         "value":  "0"
	I1213 13:16:24.392826  139657 command_runner.go:130] >       },
	I1213 13:16:24.392833  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.392843  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.392852  139657 command_runner.go:130] >     },
	I1213 13:16:24.392856  139657 command_runner.go:130] >     {
	I1213 13:16:24.392868  139657 command_runner.go:130] >       "id":  "01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8",
	I1213 13:16:24.392876  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.392888  139657 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.2"
	I1213 13:16:24.392895  139657 command_runner.go:130] >       ],
	I1213 13:16:24.392909  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.392924  139657 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb",
	I1213 13:16:24.392940  139657 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e"
	I1213 13:16:24.392949  139657 command_runner.go:130] >       ],
	I1213 13:16:24.392959  139657 command_runner.go:130] >       "size":  "76004183",
	I1213 13:16:24.392967  139657 command_runner.go:130] >       "uid":  {
	I1213 13:16:24.392977  139657 command_runner.go:130] >         "value":  "0"
	I1213 13:16:24.392985  139657 command_runner.go:130] >       },
	I1213 13:16:24.392992  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.393001  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.393007  139657 command_runner.go:130] >     },
	I1213 13:16:24.393011  139657 command_runner.go:130] >     {
	I1213 13:16:24.393021  139657 command_runner.go:130] >       "id":  "8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45",
	I1213 13:16:24.393031  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.393042  139657 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.2"
	I1213 13:16:24.393048  139657 command_runner.go:130] >       ],
	I1213 13:16:24.393058  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.393089  139657 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74",
	I1213 13:16:24.393113  139657 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"
	I1213 13:16:24.393119  139657 command_runner.go:130] >       ],
	I1213 13:16:24.393123  139657 command_runner.go:130] >       "size":  "73145240",
	I1213 13:16:24.393133  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.393140  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.393145  139657 command_runner.go:130] >     },
	I1213 13:16:24.393150  139657 command_runner.go:130] >     {
	I1213 13:16:24.393160  139657 command_runner.go:130] >       "id":  "88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952",
	I1213 13:16:24.393167  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.393174  139657 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.2"
	I1213 13:16:24.393179  139657 command_runner.go:130] >       ],
	I1213 13:16:24.393186  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.393197  139657 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6",
	I1213 13:16:24.393226  139657 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda"
	I1213 13:16:24.393232  139657 command_runner.go:130] >       ],
	I1213 13:16:24.393246  139657 command_runner.go:130] >       "size":  "53848919",
	I1213 13:16:24.393251  139657 command_runner.go:130] >       "uid":  {
	I1213 13:16:24.393257  139657 command_runner.go:130] >         "value":  "0"
	I1213 13:16:24.393262  139657 command_runner.go:130] >       },
	I1213 13:16:24.393267  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.393274  139657 command_runner.go:130] >       "pinned":  false
	I1213 13:16:24.393281  139657 command_runner.go:130] >     },
	I1213 13:16:24.393286  139657 command_runner.go:130] >     {
	I1213 13:16:24.393296  139657 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1213 13:16:24.393300  139657 command_runner.go:130] >       "repoTags":  [
	I1213 13:16:24.393305  139657 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 13:16:24.393311  139657 command_runner.go:130] >       ],
	I1213 13:16:24.393319  139657 command_runner.go:130] >       "repoDigests":  [
	I1213 13:16:24.393333  139657 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1213 13:16:24.393349  139657 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1213 13:16:24.393357  139657 command_runner.go:130] >       ],
	I1213 13:16:24.393367  139657 command_runner.go:130] >       "size":  "742092",
	I1213 13:16:24.393376  139657 command_runner.go:130] >       "uid":  {
	I1213 13:16:24.393383  139657 command_runner.go:130] >         "value":  "65535"
	I1213 13:16:24.393390  139657 command_runner.go:130] >       },
	I1213 13:16:24.393396  139657 command_runner.go:130] >       "username":  "",
	I1213 13:16:24.393405  139657 command_runner.go:130] >       "pinned":  true
	I1213 13:16:24.393408  139657 command_runner.go:130] >     }
	I1213 13:16:24.393416  139657 command_runner.go:130] >   ]
	I1213 13:16:24.393422  139657 command_runner.go:130] > }
	I1213 13:16:24.393572  139657 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 13:16:24.393595  139657 cache_images.go:86] Images are preloaded, skipping loading
	I1213 13:16:24.393606  139657 kubeadm.go:935] updating node { 192.168.39.124 8441 v1.34.2 crio true true} ...
	I1213 13:16:24.393771  139657 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-101171 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.124
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:functional-101171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 13:16:24.393855  139657 ssh_runner.go:195] Run: crio config
	I1213 13:16:24.427284  139657 command_runner.go:130] ! time="2025-12-13 13:16:24.422256723Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1213 13:16:24.433797  139657 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1213 13:16:24.439545  139657 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1213 13:16:24.439572  139657 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1213 13:16:24.439581  139657 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1213 13:16:24.439585  139657 command_runner.go:130] > #
	I1213 13:16:24.439594  139657 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1213 13:16:24.439602  139657 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1213 13:16:24.439611  139657 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1213 13:16:24.439629  139657 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1213 13:16:24.439638  139657 command_runner.go:130] > # reload'.
	I1213 13:16:24.439648  139657 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1213 13:16:24.439661  139657 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1213 13:16:24.439675  139657 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1213 13:16:24.439687  139657 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1213 13:16:24.439693  139657 command_runner.go:130] > [crio]
	I1213 13:16:24.439704  139657 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1213 13:16:24.439712  139657 command_runner.go:130] > # containers images, in this directory.
	I1213 13:16:24.439720  139657 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1213 13:16:24.439738  139657 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1213 13:16:24.439749  139657 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1213 13:16:24.439761  139657 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1213 13:16:24.439771  139657 command_runner.go:130] > # imagestore = ""
	I1213 13:16:24.439781  139657 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1213 13:16:24.439794  139657 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1213 13:16:24.439803  139657 command_runner.go:130] > # storage_driver = "overlay"
	I1213 13:16:24.439813  139657 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1213 13:16:24.439825  139657 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1213 13:16:24.439832  139657 command_runner.go:130] > storage_option = [
	I1213 13:16:24.439844  139657 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1213 13:16:24.439852  139657 command_runner.go:130] > ]
	I1213 13:16:24.439861  139657 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1213 13:16:24.439872  139657 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1213 13:16:24.439882  139657 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1213 13:16:24.439891  139657 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1213 13:16:24.439911  139657 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1213 13:16:24.439921  139657 command_runner.go:130] > # always happen on a node reboot
	I1213 13:16:24.439930  139657 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1213 13:16:24.439952  139657 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1213 13:16:24.439965  139657 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1213 13:16:24.439979  139657 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1213 13:16:24.439990  139657 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1213 13:16:24.440002  139657 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1213 13:16:24.440018  139657 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1213 13:16:24.440026  139657 command_runner.go:130] > # internal_wipe = true
	I1213 13:16:24.440039  139657 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1213 13:16:24.440051  139657 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1213 13:16:24.440059  139657 command_runner.go:130] > # internal_repair = false
	I1213 13:16:24.440068  139657 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1213 13:16:24.440095  139657 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1213 13:16:24.440115  139657 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1213 13:16:24.440127  139657 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1213 13:16:24.440141  139657 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1213 13:16:24.440150  139657 command_runner.go:130] > [crio.api]
	I1213 13:16:24.440158  139657 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1213 13:16:24.440169  139657 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1213 13:16:24.440178  139657 command_runner.go:130] > # IP address on which the stream server will listen.
	I1213 13:16:24.440188  139657 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1213 13:16:24.440198  139657 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1213 13:16:24.440210  139657 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1213 13:16:24.440217  139657 command_runner.go:130] > # stream_port = "0"
	I1213 13:16:24.440227  139657 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1213 13:16:24.440235  139657 command_runner.go:130] > # stream_enable_tls = false
	I1213 13:16:24.440245  139657 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1213 13:16:24.440256  139657 command_runner.go:130] > # stream_idle_timeout = ""
	I1213 13:16:24.440267  139657 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1213 13:16:24.440289  139657 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1213 13:16:24.440298  139657 command_runner.go:130] > # minutes.
	I1213 13:16:24.440313  139657 command_runner.go:130] > # stream_tls_cert = ""
	I1213 13:16:24.440341  139657 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1213 13:16:24.440355  139657 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1213 13:16:24.440363  139657 command_runner.go:130] > # stream_tls_key = ""
	I1213 13:16:24.440375  139657 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1213 13:16:24.440386  139657 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1213 13:16:24.440416  139657 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1213 13:16:24.440425  139657 command_runner.go:130] > # stream_tls_ca = ""
	I1213 13:16:24.440437  139657 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1213 13:16:24.440447  139657 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1213 13:16:24.440460  139657 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1213 13:16:24.440470  139657 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1213 13:16:24.440480  139657 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1213 13:16:24.440492  139657 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1213 13:16:24.440498  139657 command_runner.go:130] > [crio.runtime]
	I1213 13:16:24.440510  139657 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1213 13:16:24.440519  139657 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1213 13:16:24.440528  139657 command_runner.go:130] > # "nofile=1024:2048"
	I1213 13:16:24.440538  139657 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1213 13:16:24.440547  139657 command_runner.go:130] > # default_ulimits = [
	I1213 13:16:24.440553  139657 command_runner.go:130] > # ]
	I1213 13:16:24.440565  139657 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1213 13:16:24.440572  139657 command_runner.go:130] > # no_pivot = false
	I1213 13:16:24.440582  139657 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1213 13:16:24.440592  139657 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1213 13:16:24.440603  139657 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1213 13:16:24.440612  139657 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1213 13:16:24.440623  139657 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1213 13:16:24.440635  139657 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1213 13:16:24.440644  139657 command_runner.go:130] > conmon = "/usr/bin/conmon"
	I1213 13:16:24.440652  139657 command_runner.go:130] > # Cgroup setting for conmon
	I1213 13:16:24.440664  139657 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1213 13:16:24.440672  139657 command_runner.go:130] > conmon_cgroup = "pod"
	I1213 13:16:24.440690  139657 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1213 13:16:24.440701  139657 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1213 13:16:24.440713  139657 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1213 13:16:24.440726  139657 command_runner.go:130] > conmon_env = [
	I1213 13:16:24.440736  139657 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1213 13:16:24.440743  139657 command_runner.go:130] > ]
	I1213 13:16:24.440753  139657 command_runner.go:130] > # Additional environment variables to set for all the
	I1213 13:16:24.440764  139657 command_runner.go:130] > # containers. These are overridden if set in the
	I1213 13:16:24.440774  139657 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1213 13:16:24.440783  139657 command_runner.go:130] > # default_env = [
	I1213 13:16:24.440788  139657 command_runner.go:130] > # ]
	I1213 13:16:24.440801  139657 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1213 13:16:24.440813  139657 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1213 13:16:24.440822  139657 command_runner.go:130] > # selinux = false
	I1213 13:16:24.440831  139657 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1213 13:16:24.440844  139657 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1213 13:16:24.440853  139657 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1213 13:16:24.440860  139657 command_runner.go:130] > # seccomp_profile = ""
	I1213 13:16:24.440868  139657 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1213 13:16:24.440877  139657 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1213 13:16:24.440888  139657 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1213 13:16:24.440896  139657 command_runner.go:130] > # which might increase security.
	I1213 13:16:24.440904  139657 command_runner.go:130] > # This option is currently deprecated,
	I1213 13:16:24.440914  139657 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1213 13:16:24.440925  139657 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1213 13:16:24.440935  139657 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1213 13:16:24.440949  139657 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1213 13:16:24.440961  139657 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1213 13:16:24.440972  139657 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1213 13:16:24.440982  139657 command_runner.go:130] > # This option supports live configuration reload.
	I1213 13:16:24.440989  139657 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1213 13:16:24.441001  139657 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1213 13:16:24.441008  139657 command_runner.go:130] > # the cgroup blockio controller.
	I1213 13:16:24.441025  139657 command_runner.go:130] > # blockio_config_file = ""
	I1213 13:16:24.441040  139657 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1213 13:16:24.441047  139657 command_runner.go:130] > # blockio parameters.
	I1213 13:16:24.441054  139657 command_runner.go:130] > # blockio_reload = false
	I1213 13:16:24.441065  139657 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1213 13:16:24.441088  139657 command_runner.go:130] > # irqbalance daemon.
	I1213 13:16:24.441100  139657 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1213 13:16:24.441116  139657 command_runner.go:130] > # irqbalance_config_restore_file allows setting a cpu mask CRI-O should
	I1213 13:16:24.441138  139657 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1213 13:16:24.441152  139657 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1213 13:16:24.441171  139657 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1213 13:16:24.441183  139657 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1213 13:16:24.441194  139657 command_runner.go:130] > # This option supports live configuration reload.
	I1213 13:16:24.441201  139657 command_runner.go:130] > # rdt_config_file = ""
	I1213 13:16:24.441210  139657 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1213 13:16:24.441217  139657 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1213 13:16:24.441272  139657 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1213 13:16:24.441283  139657 command_runner.go:130] > # separate_pull_cgroup = ""
	I1213 13:16:24.441291  139657 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1213 13:16:24.441300  139657 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1213 13:16:24.441306  139657 command_runner.go:130] > # will be added.
	I1213 13:16:24.441314  139657 command_runner.go:130] > # default_capabilities = [
	I1213 13:16:24.441320  139657 command_runner.go:130] > # 	"CHOWN",
	I1213 13:16:24.441328  139657 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1213 13:16:24.441334  139657 command_runner.go:130] > # 	"FSETID",
	I1213 13:16:24.441341  139657 command_runner.go:130] > # 	"FOWNER",
	I1213 13:16:24.441347  139657 command_runner.go:130] > # 	"SETGID",
	I1213 13:16:24.441355  139657 command_runner.go:130] > # 	"SETUID",
	I1213 13:16:24.441361  139657 command_runner.go:130] > # 	"SETPCAP",
	I1213 13:16:24.441368  139657 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1213 13:16:24.441375  139657 command_runner.go:130] > # 	"KILL",
	I1213 13:16:24.441381  139657 command_runner.go:130] > # ]
	I1213 13:16:24.441394  139657 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1213 13:16:24.441414  139657 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1213 13:16:24.441425  139657 command_runner.go:130] > # add_inheritable_capabilities = false
	I1213 13:16:24.441436  139657 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1213 13:16:24.441449  139657 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1213 13:16:24.441457  139657 command_runner.go:130] > default_sysctls = [
	I1213 13:16:24.441465  139657 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1213 13:16:24.441471  139657 command_runner.go:130] > ]
	I1213 13:16:24.441479  139657 command_runner.go:130] > # List of devices on the host that a
	I1213 13:16:24.441492  139657 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1213 13:16:24.441499  139657 command_runner.go:130] > # allowed_devices = [
	I1213 13:16:24.441514  139657 command_runner.go:130] > # 	"/dev/fuse",
	I1213 13:16:24.441521  139657 command_runner.go:130] > # ]
	I1213 13:16:24.441529  139657 command_runner.go:130] > # List of additional devices, specified as
	I1213 13:16:24.441544  139657 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1213 13:16:24.441554  139657 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1213 13:16:24.441563  139657 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1213 13:16:24.441577  139657 command_runner.go:130] > # additional_devices = [
	I1213 13:16:24.441583  139657 command_runner.go:130] > # ]
	I1213 13:16:24.441592  139657 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1213 13:16:24.441599  139657 command_runner.go:130] > # cdi_spec_dirs = [
	I1213 13:16:24.441606  139657 command_runner.go:130] > # 	"/etc/cdi",
	I1213 13:16:24.441615  139657 command_runner.go:130] > # 	"/var/run/cdi",
	I1213 13:16:24.441620  139657 command_runner.go:130] > # ]
	I1213 13:16:24.441631  139657 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1213 13:16:24.441644  139657 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1213 13:16:24.441653  139657 command_runner.go:130] > # Defaults to false.
	I1213 13:16:24.441661  139657 command_runner.go:130] > # device_ownership_from_security_context = false
	I1213 13:16:24.441674  139657 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1213 13:16:24.441685  139657 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1213 13:16:24.441694  139657 command_runner.go:130] > # hooks_dir = [
	I1213 13:16:24.441700  139657 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1213 13:16:24.441707  139657 command_runner.go:130] > # ]
	I1213 13:16:24.441719  139657 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1213 13:16:24.441739  139657 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1213 13:16:24.441751  139657 command_runner.go:130] > # its default mounts from the following two files:
	I1213 13:16:24.441757  139657 command_runner.go:130] > #
	I1213 13:16:24.441770  139657 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1213 13:16:24.441780  139657 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1213 13:16:24.441791  139657 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1213 13:16:24.441797  139657 command_runner.go:130] > #
	I1213 13:16:24.441809  139657 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1213 13:16:24.441819  139657 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1213 13:16:24.441832  139657 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1213 13:16:24.441841  139657 command_runner.go:130] > #      only add mounts it finds in this file.
	I1213 13:16:24.441849  139657 command_runner.go:130] > #
	I1213 13:16:24.441856  139657 command_runner.go:130] > # default_mounts_file = ""
	I1213 13:16:24.441866  139657 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1213 13:16:24.441877  139657 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1213 13:16:24.441886  139657 command_runner.go:130] > pids_limit = 1024
	I1213 13:16:24.441896  139657 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1213 13:16:24.441906  139657 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1213 13:16:24.441917  139657 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1213 13:16:24.441931  139657 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1213 13:16:24.441941  139657 command_runner.go:130] > # log_size_max = -1
	I1213 13:16:24.441953  139657 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1213 13:16:24.441963  139657 command_runner.go:130] > # log_to_journald = false
	I1213 13:16:24.441977  139657 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1213 13:16:24.441987  139657 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1213 13:16:24.441995  139657 command_runner.go:130] > # Path to directory for container attach sockets.
	I1213 13:16:24.442006  139657 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1213 13:16:24.442015  139657 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1213 13:16:24.442024  139657 command_runner.go:130] > # bind_mount_prefix = ""
	I1213 13:16:24.442034  139657 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1213 13:16:24.442042  139657 command_runner.go:130] > # read_only = false
	I1213 13:16:24.442052  139657 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1213 13:16:24.442065  139657 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1213 13:16:24.442093  139657 command_runner.go:130] > # live configuration reload.
	I1213 13:16:24.442101  139657 command_runner.go:130] > # log_level = "info"
	I1213 13:16:24.442120  139657 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1213 13:16:24.442131  139657 command_runner.go:130] > # This option supports live configuration reload.
	I1213 13:16:24.442139  139657 command_runner.go:130] > # log_filter = ""
	I1213 13:16:24.442149  139657 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1213 13:16:24.442163  139657 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1213 13:16:24.442172  139657 command_runner.go:130] > # separated by comma.
	I1213 13:16:24.442185  139657 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 13:16:24.442194  139657 command_runner.go:130] > # uid_mappings = ""
	I1213 13:16:24.442205  139657 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1213 13:16:24.442218  139657 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1213 13:16:24.442227  139657 command_runner.go:130] > # separated by comma.
	I1213 13:16:24.442244  139657 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 13:16:24.442254  139657 command_runner.go:130] > # gid_mappings = ""
	I1213 13:16:24.442264  139657 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1213 13:16:24.442277  139657 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1213 13:16:24.442289  139657 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1213 13:16:24.442302  139657 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 13:16:24.442310  139657 command_runner.go:130] > # minimum_mappable_uid = -1
	I1213 13:16:24.442320  139657 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1213 13:16:24.442333  139657 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1213 13:16:24.442344  139657 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1213 13:16:24.442357  139657 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 13:16:24.442364  139657 command_runner.go:130] > # minimum_mappable_gid = -1
	I1213 13:16:24.442373  139657 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1213 13:16:24.442391  139657 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1213 13:16:24.442402  139657 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1213 13:16:24.442409  139657 command_runner.go:130] > # ctr_stop_timeout = 30
	I1213 13:16:24.442419  139657 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1213 13:16:24.442430  139657 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1213 13:16:24.442441  139657 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1213 13:16:24.442450  139657 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1213 13:16:24.442467  139657 command_runner.go:130] > drop_infra_ctr = false
	I1213 13:16:24.442479  139657 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1213 13:16:24.442489  139657 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1213 13:16:24.442503  139657 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1213 13:16:24.442510  139657 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1213 13:16:24.442523  139657 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1213 13:16:24.442534  139657 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1213 13:16:24.442546  139657 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1213 13:16:24.442554  139657 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1213 13:16:24.442563  139657 command_runner.go:130] > # shared_cpuset = ""
	I1213 13:16:24.442572  139657 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1213 13:16:24.442581  139657 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1213 13:16:24.442589  139657 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1213 13:16:24.442601  139657 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1213 13:16:24.442608  139657 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1213 13:16:24.442618  139657 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1213 13:16:24.442631  139657 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1213 13:16:24.442640  139657 command_runner.go:130] > # enable_criu_support = false
	I1213 13:16:24.442650  139657 command_runner.go:130] > # Enable/disable the generation of the container,
	I1213 13:16:24.442660  139657 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1213 13:16:24.442667  139657 command_runner.go:130] > # enable_pod_events = false
	I1213 13:16:24.442677  139657 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1213 13:16:24.442699  139657 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1213 13:16:24.442706  139657 command_runner.go:130] > # default_runtime = "runc"
	I1213 13:16:24.442715  139657 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1213 13:16:24.442726  139657 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1213 13:16:24.442741  139657 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1213 13:16:24.442756  139657 command_runner.go:130] > # creation as a file is not desired either.
	I1213 13:16:24.442774  139657 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1213 13:16:24.442784  139657 command_runner.go:130] > # the hostname is being managed dynamically.
	I1213 13:16:24.442792  139657 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1213 13:16:24.442797  139657 command_runner.go:130] > # ]
	I1213 13:16:24.442815  139657 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1213 13:16:24.442828  139657 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1213 13:16:24.442840  139657 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1213 13:16:24.442851  139657 command_runner.go:130] > # Each entry in the table should follow the format:
	I1213 13:16:24.442856  139657 command_runner.go:130] > #
	I1213 13:16:24.442865  139657 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1213 13:16:24.442873  139657 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1213 13:16:24.442881  139657 command_runner.go:130] > # runtime_type = "oci"
	I1213 13:16:24.442949  139657 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1213 13:16:24.442960  139657 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1213 13:16:24.442967  139657 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1213 13:16:24.442973  139657 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1213 13:16:24.442978  139657 command_runner.go:130] > # monitor_env = []
	I1213 13:16:24.442986  139657 command_runner.go:130] > # privileged_without_host_devices = false
	I1213 13:16:24.442993  139657 command_runner.go:130] > # allowed_annotations = []
	I1213 13:16:24.443003  139657 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1213 13:16:24.443012  139657 command_runner.go:130] > # Where:
	I1213 13:16:24.443020  139657 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1213 13:16:24.443031  139657 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1213 13:16:24.443049  139657 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1213 13:16:24.443061  139657 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1213 13:16:24.443080  139657 command_runner.go:130] > #   in $PATH.
	I1213 13:16:24.443104  139657 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1213 13:16:24.443121  139657 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1213 13:16:24.443132  139657 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1213 13:16:24.443140  139657 command_runner.go:130] > #   state.
	I1213 13:16:24.443151  139657 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1213 13:16:24.443162  139657 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1213 13:16:24.443173  139657 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1213 13:16:24.443185  139657 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1213 13:16:24.443195  139657 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1213 13:16:24.443209  139657 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1213 13:16:24.443220  139657 command_runner.go:130] > #   The currently recognized values are:
	I1213 13:16:24.443242  139657 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1213 13:16:24.443258  139657 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1213 13:16:24.443270  139657 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1213 13:16:24.443280  139657 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1213 13:16:24.443293  139657 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1213 13:16:24.443305  139657 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1213 13:16:24.443319  139657 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1213 13:16:24.443332  139657 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1213 13:16:24.443342  139657 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1213 13:16:24.443354  139657 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1213 13:16:24.443362  139657 command_runner.go:130] > #   deprecated option "conmon".
	I1213 13:16:24.443374  139657 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1213 13:16:24.443385  139657 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1213 13:16:24.443397  139657 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1213 13:16:24.443407  139657 command_runner.go:130] > #   should be moved to the container's cgroup
	I1213 13:16:24.443418  139657 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1213 13:16:24.443429  139657 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1213 13:16:24.443440  139657 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1213 13:16:24.443452  139657 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1213 13:16:24.443457  139657 command_runner.go:130] > #
	I1213 13:16:24.443467  139657 command_runner.go:130] > # Using the seccomp notifier feature:
	I1213 13:16:24.443473  139657 command_runner.go:130] > #
	I1213 13:16:24.443482  139657 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1213 13:16:24.443496  139657 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1213 13:16:24.443504  139657 command_runner.go:130] > #
	I1213 13:16:24.443514  139657 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1213 13:16:24.443525  139657 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1213 13:16:24.443533  139657 command_runner.go:130] > #
	I1213 13:16:24.443544  139657 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1213 13:16:24.443550  139657 command_runner.go:130] > # feature.
	I1213 13:16:24.443555  139657 command_runner.go:130] > #
	I1213 13:16:24.443567  139657 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1213 13:16:24.443577  139657 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1213 13:16:24.443598  139657 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1213 13:16:24.443613  139657 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1213 13:16:24.443628  139657 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1213 13:16:24.443636  139657 command_runner.go:130] > #
	I1213 13:16:24.443646  139657 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1213 13:16:24.443659  139657 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1213 13:16:24.443667  139657 command_runner.go:130] > #
	I1213 13:16:24.443676  139657 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1213 13:16:24.443688  139657 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1213 13:16:24.443694  139657 command_runner.go:130] > #
	I1213 13:16:24.443705  139657 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1213 13:16:24.443718  139657 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1213 13:16:24.443725  139657 command_runner.go:130] > # limitation.
	I1213 13:16:24.443734  139657 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1213 13:16:24.443740  139657 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1213 13:16:24.443747  139657 command_runner.go:130] > runtime_type = "oci"
	I1213 13:16:24.443755  139657 command_runner.go:130] > runtime_root = "/run/runc"
	I1213 13:16:24.443766  139657 command_runner.go:130] > runtime_config_path = ""
	I1213 13:16:24.443773  139657 command_runner.go:130] > monitor_path = "/usr/bin/conmon"
	I1213 13:16:24.443779  139657 command_runner.go:130] > monitor_cgroup = "pod"
	I1213 13:16:24.443786  139657 command_runner.go:130] > monitor_exec_cgroup = ""
	I1213 13:16:24.443792  139657 command_runner.go:130] > monitor_env = [
	I1213 13:16:24.443802  139657 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1213 13:16:24.443810  139657 command_runner.go:130] > ]
	I1213 13:16:24.443818  139657 command_runner.go:130] > privileged_without_host_devices = false
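The [crio.runtime.runtimes.runc] entry above is the only handler defined on this node. As a minimal sketch of the handler table format documented earlier (the handler name, paths, and allowed annotation below are illustrative assumptions, not taken from this cluster's config), an additional runtime such as crun would be declared the same way:

	# Hypothetical second handler -- illustrative names/paths, not present on this node.
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/bin/conmon"
	monitor_cgroup = "pod"
	# Allowing this annotation would let pods scheduled on this handler opt into
	# the seccomp notifier feature described in the comments above.
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]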
	I1213 13:16:24.443830  139657 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1213 13:16:24.443839  139657 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1213 13:16:24.443849  139657 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1213 13:16:24.443863  139657 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1213 13:16:24.443876  139657 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1213 13:16:24.443887  139657 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1213 13:16:24.443903  139657 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1213 13:16:24.443918  139657 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1213 13:16:24.443936  139657 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1213 13:16:24.443950  139657 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1213 13:16:24.443956  139657 command_runner.go:130] > # Example:
	I1213 13:16:24.443964  139657 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1213 13:16:24.443971  139657 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1213 13:16:24.443984  139657 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1213 13:16:24.443994  139657 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1213 13:16:24.444004  139657 command_runner.go:130] > # cpuset = 0
	I1213 13:16:24.444013  139657 command_runner.go:130] > # cpushares = "0-1"
	I1213 13:16:24.444019  139657 command_runner.go:130] > # Where:
	I1213 13:16:24.444027  139657 command_runner.go:130] > # The workload name is workload-type.
	I1213 13:16:24.444038  139657 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1213 13:16:24.444050  139657 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1213 13:16:24.444060  139657 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1213 13:16:24.444086  139657 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1213 13:16:24.444097  139657 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1213 13:16:24.444112  139657 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1213 13:16:24.444127  139657 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1213 13:16:24.444136  139657 command_runner.go:130] > # Default value is set to true
	I1213 13:16:24.444143  139657 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1213 13:16:24.444152  139657 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1213 13:16:24.444162  139657 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1213 13:16:24.444170  139657 command_runner.go:130] > # Default value is set to 'false'
	I1213 13:16:24.444179  139657 command_runner.go:130] > # disable_hostport_mapping = false
	I1213 13:16:24.444194  139657 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1213 13:16:24.444202  139657 command_runner.go:130] > #
	I1213 13:16:24.444212  139657 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1213 13:16:24.444227  139657 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1213 13:16:24.444240  139657 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1213 13:16:24.444250  139657 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1213 13:16:24.444260  139657 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1213 13:16:24.444277  139657 command_runner.go:130] > [crio.image]
	I1213 13:16:24.444290  139657 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1213 13:16:24.444308  139657 command_runner.go:130] > # default_transport = "docker://"
	I1213 13:16:24.444322  139657 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1213 13:16:24.444336  139657 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1213 13:16:24.444346  139657 command_runner.go:130] > # global_auth_file = ""
	I1213 13:16:24.444357  139657 command_runner.go:130] > # The image used to instantiate infra containers.
	I1213 13:16:24.444366  139657 command_runner.go:130] > # This option supports live configuration reload.
	I1213 13:16:24.444377  139657 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10.1"
	I1213 13:16:24.444388  139657 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1213 13:16:24.444401  139657 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1213 13:16:24.444411  139657 command_runner.go:130] > # This option supports live configuration reload.
	I1213 13:16:24.444418  139657 command_runner.go:130] > # pause_image_auth_file = ""
	I1213 13:16:24.444432  139657 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1213 13:16:24.444443  139657 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1213 13:16:24.444456  139657 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1213 13:16:24.444465  139657 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1213 13:16:24.444475  139657 command_runner.go:130] > # pause_command = "/pause"
	I1213 13:16:24.444485  139657 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1213 13:16:24.444498  139657 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1213 13:16:24.444510  139657 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1213 13:16:24.444522  139657 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1213 13:16:24.444533  139657 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1213 13:16:24.444547  139657 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1213 13:16:24.444555  139657 command_runner.go:130] > # pinned_images = [
	I1213 13:16:24.444560  139657 command_runner.go:130] > # ]
	I1213 13:16:24.444570  139657 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1213 13:16:24.444583  139657 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1213 13:16:24.444593  139657 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1213 13:16:24.444612  139657 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1213 13:16:24.444624  139657 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1213 13:16:24.444632  139657 command_runner.go:130] > # signature_policy = ""
	I1213 13:16:24.444644  139657 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1213 13:16:24.444655  139657 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1213 13:16:24.444668  139657 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1213 13:16:24.444686  139657 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1213 13:16:24.444698  139657 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1213 13:16:24.444707  139657 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1213 13:16:24.444717  139657 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1213 13:16:24.444730  139657 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1213 13:16:24.444737  139657 command_runner.go:130] > # changing them here.
	I1213 13:16:24.444744  139657 command_runner.go:130] > # insecure_registries = [
	I1213 13:16:24.444749  139657 command_runner.go:130] > # ]
	I1213 13:16:24.444762  139657 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1213 13:16:24.444771  139657 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1213 13:16:24.444780  139657 command_runner.go:130] > # image_volumes = "mkdir"
	I1213 13:16:24.444788  139657 command_runner.go:130] > # Temporary directory to use for storing big files
	I1213 13:16:24.444796  139657 command_runner.go:130] > # big_files_temporary_dir = ""
	I1213 13:16:24.444807  139657 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1213 13:16:24.444818  139657 command_runner.go:130] > # CNI plugins.
	I1213 13:16:24.444827  139657 command_runner.go:130] > [crio.network]
	I1213 13:16:24.444837  139657 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1213 13:16:24.444847  139657 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1213 13:16:24.444854  139657 command_runner.go:130] > # cni_default_network = ""
	I1213 13:16:24.444863  139657 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1213 13:16:24.444871  139657 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1213 13:16:24.444880  139657 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1213 13:16:24.444887  139657 command_runner.go:130] > # plugin_dirs = [
	I1213 13:16:24.444894  139657 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1213 13:16:24.444898  139657 command_runner.go:130] > # ]
	I1213 13:16:24.444913  139657 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1213 13:16:24.444923  139657 command_runner.go:130] > [crio.metrics]
	I1213 13:16:24.444931  139657 command_runner.go:130] > # Globally enable or disable metrics support.
	I1213 13:16:24.444941  139657 command_runner.go:130] > enable_metrics = true
	I1213 13:16:24.444949  139657 command_runner.go:130] > # Specify enabled metrics collectors.
	I1213 13:16:24.444959  139657 command_runner.go:130] > # Per default all metrics are enabled.
	I1213 13:16:24.444971  139657 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1213 13:16:24.444984  139657 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1213 13:16:24.445004  139657 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1213 13:16:24.445013  139657 command_runner.go:130] > # metrics_collectors = [
	I1213 13:16:24.445020  139657 command_runner.go:130] > # 	"operations",
	I1213 13:16:24.445031  139657 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1213 13:16:24.445038  139657 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1213 13:16:24.445045  139657 command_runner.go:130] > # 	"operations_errors",
	I1213 13:16:24.445052  139657 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1213 13:16:24.445060  139657 command_runner.go:130] > # 	"image_pulls_by_name",
	I1213 13:16:24.445068  139657 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1213 13:16:24.445085  139657 command_runner.go:130] > # 	"image_pulls_failures",
	I1213 13:16:24.445092  139657 command_runner.go:130] > # 	"image_pulls_successes",
	I1213 13:16:24.445099  139657 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1213 13:16:24.445110  139657 command_runner.go:130] > # 	"image_layer_reuse",
	I1213 13:16:24.445121  139657 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1213 13:16:24.445128  139657 command_runner.go:130] > # 	"containers_oom_total",
	I1213 13:16:24.445134  139657 command_runner.go:130] > # 	"containers_oom",
	I1213 13:16:24.445141  139657 command_runner.go:130] > # 	"processes_defunct",
	I1213 13:16:24.445147  139657 command_runner.go:130] > # 	"operations_total",
	I1213 13:16:24.445155  139657 command_runner.go:130] > # 	"operations_latency_seconds",
	I1213 13:16:24.445163  139657 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1213 13:16:24.445170  139657 command_runner.go:130] > # 	"operations_errors_total",
	I1213 13:16:24.445178  139657 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1213 13:16:24.445186  139657 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1213 13:16:24.445194  139657 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1213 13:16:24.445202  139657 command_runner.go:130] > # 	"image_pulls_success_total",
	I1213 13:16:24.445210  139657 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1213 13:16:24.445218  139657 command_runner.go:130] > # 	"containers_oom_count_total",
	I1213 13:16:24.445231  139657 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1213 13:16:24.445238  139657 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1213 13:16:24.445244  139657 command_runner.go:130] > # ]
	I1213 13:16:24.445253  139657 command_runner.go:130] > # The port on which the metrics server will listen.
	I1213 13:16:24.445259  139657 command_runner.go:130] > # metrics_port = 9090
	I1213 13:16:24.445268  139657 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1213 13:16:24.445284  139657 command_runner.go:130] > # metrics_socket = ""
	I1213 13:16:24.445295  139657 command_runner.go:130] > # The certificate for the secure metrics server.
	I1213 13:16:24.445306  139657 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1213 13:16:24.445319  139657 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1213 13:16:24.445328  139657 command_runner.go:130] > # certificate on any modification event.
	I1213 13:16:24.445335  139657 command_runner.go:130] > # metrics_cert = ""
	I1213 13:16:24.445344  139657 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1213 13:16:24.445355  139657 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1213 13:16:24.445360  139657 command_runner.go:130] > # metrics_key = ""
	I1213 13:16:24.445370  139657 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1213 13:16:24.445379  139657 command_runner.go:130] > [crio.tracing]
	I1213 13:16:24.445387  139657 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1213 13:16:24.445394  139657 command_runner.go:130] > # enable_tracing = false
	I1213 13:16:24.445403  139657 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1213 13:16:24.445413  139657 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1213 13:16:24.445424  139657 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1213 13:16:24.445435  139657 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1213 13:16:24.445444  139657 command_runner.go:130] > # CRI-O NRI configuration.
	I1213 13:16:24.445450  139657 command_runner.go:130] > [crio.nri]
	I1213 13:16:24.445457  139657 command_runner.go:130] > # Globally enable or disable NRI.
	I1213 13:16:24.445465  139657 command_runner.go:130] > # enable_nri = false
	I1213 13:16:24.445471  139657 command_runner.go:130] > # NRI socket to listen on.
	I1213 13:16:24.445479  139657 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1213 13:16:24.445490  139657 command_runner.go:130] > # NRI plugin directory to use.
	I1213 13:16:24.445498  139657 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1213 13:16:24.445509  139657 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1213 13:16:24.445518  139657 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1213 13:16:24.445528  139657 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1213 13:16:24.445539  139657 command_runner.go:130] > # nri_disable_connections = false
	I1213 13:16:24.445548  139657 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1213 13:16:24.445556  139657 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1213 13:16:24.445564  139657 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1213 13:16:24.445572  139657 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1213 13:16:24.445606  139657 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1213 13:16:24.445616  139657 command_runner.go:130] > [crio.stats]
	I1213 13:16:24.445625  139657 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1213 13:16:24.445640  139657 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1213 13:16:24.445648  139657 command_runner.go:130] > # stats_collection_period = 0
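The lines above (through [crio.stats]) are CRI-O's effective configuration as echoed back during provisioning. Assuming the standard /etc/crio/crio.conf.d drop-in directory is honoured on this image (an assumption, not shown in this log), a single setting could be overridden without editing the main file, for example:

	# /etc/crio/crio.conf.d/99-debug.conf -- hypothetical drop-in; drop-ins are applied
	# after the main config, so this raises log verbosity for the whole daemon.
	[crio.runtime]
	log_level = "debug"

Per the comment earlier in the dump, log_level also supports live configuration reload, so such a drop-in would not necessarily require a full daemon restart.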
	I1213 13:16:24.445769  139657 cni.go:84] Creating CNI manager for ""
	I1213 13:16:24.445787  139657 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 13:16:24.445812  139657 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 13:16:24.445847  139657 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.124 APIServerPort:8441 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-101171 NodeName:functional-101171 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.124"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.124 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 13:16:24.446054  139657 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.124
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-101171"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.124"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.124"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 13:16:24.446191  139657 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 13:16:24.458394  139657 command_runner.go:130] > kubeadm
	I1213 13:16:24.458424  139657 command_runner.go:130] > kubectl
	I1213 13:16:24.458446  139657 command_runner.go:130] > kubelet
	I1213 13:16:24.458789  139657 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 13:16:24.458853  139657 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 13:16:24.471347  139657 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1213 13:16:24.493805  139657 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 13:16:24.515984  139657 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1213 13:16:24.538444  139657 ssh_runner.go:195] Run: grep 192.168.39.124	control-plane.minikube.internal$ /etc/hosts
	I1213 13:16:24.543369  139657 command_runner.go:130] > 192.168.39.124	control-plane.minikube.internal
	I1213 13:16:24.543465  139657 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:16:24.727714  139657 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 13:16:24.748340  139657 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-101171 for IP: 192.168.39.124
	I1213 13:16:24.748371  139657 certs.go:195] generating shared ca certs ...
	I1213 13:16:24.748391  139657 certs.go:227] acquiring lock for ca certs: {Name:mk4d1e73c1a19abecca2e995e14d97b9ab149024 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:16:24.748616  139657 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-131207/.minikube/ca.key
	I1213 13:16:24.748684  139657 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-131207/.minikube/proxy-client-ca.key
	I1213 13:16:24.748697  139657 certs.go:257] generating profile certs ...
	I1213 13:16:24.748799  139657 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-101171/client.key
	I1213 13:16:24.748886  139657 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-101171/apiserver.key.194f038f
	I1213 13:16:24.748927  139657 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-101171/proxy-client.key
	I1213 13:16:24.748940  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 13:16:24.748961  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 13:16:24.748976  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 13:16:24.748999  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 13:16:24.749016  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-101171/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 13:16:24.749031  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-101171/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 13:16:24.749046  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-101171/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 13:16:24.749066  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-101171/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 13:16:24.749158  139657 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/135234.pem (1338 bytes)
	W1213 13:16:24.749196  139657 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-131207/.minikube/certs/135234_empty.pem, impossibly tiny 0 bytes
	I1213 13:16:24.749208  139657 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 13:16:24.749236  139657 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem (1078 bytes)
	I1213 13:16:24.749267  139657 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/cert.pem (1123 bytes)
	I1213 13:16:24.749300  139657 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/key.pem (1675 bytes)
	I1213 13:16:24.749360  139657 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/ssl/certs/1352342.pem (1708 bytes)
	I1213 13:16:24.749402  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:16:24.749419  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/135234.pem -> /usr/share/ca-certificates/135234.pem
	I1213 13:16:24.749434  139657 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/ssl/certs/1352342.pem -> /usr/share/ca-certificates/1352342.pem
	I1213 13:16:24.750215  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 13:16:24.784325  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 13:16:24.817785  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 13:16:24.853144  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 13:16:24.890536  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-101171/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 13:16:24.926567  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-101171/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 13:16:24.962010  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-101171/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 13:16:24.998369  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-101171/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 13:16:25.032230  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 13:16:25.068964  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/certs/135234.pem --> /usr/share/ca-certificates/135234.pem (1338 bytes)
	I1213 13:16:25.102766  139657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/ssl/certs/1352342.pem --> /usr/share/ca-certificates/1352342.pem (1708 bytes)
	I1213 13:16:25.136252  139657 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 13:16:25.160868  139657 ssh_runner.go:195] Run: openssl version
	I1213 13:16:25.169220  139657 command_runner.go:130] > OpenSSL 3.4.1 11 Feb 2025 (Library: OpenSSL 3.4.1 11 Feb 2025)
	I1213 13:16:25.169344  139657 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1352342.pem
	I1213 13:16:25.182662  139657 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1352342.pem /etc/ssl/certs/1352342.pem
	I1213 13:16:25.196346  139657 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1352342.pem
	I1213 13:16:25.202552  139657 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 13 13:13 /usr/share/ca-certificates/1352342.pem
	I1213 13:16:25.202645  139657 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 13:13 /usr/share/ca-certificates/1352342.pem
	I1213 13:16:25.202700  139657 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1352342.pem
	I1213 13:16:25.211067  139657 command_runner.go:130] > 3ec20f2e
	I1213 13:16:25.211253  139657 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 13:16:25.224328  139657 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:16:25.238368  139657 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 13:16:25.252003  139657 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:16:25.258273  139657 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 13 13:06 /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:16:25.258311  139657 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 13:06 /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:16:25.258360  139657 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:16:25.266989  139657 command_runner.go:130] > b5213941
	I1213 13:16:25.267145  139657 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 13:16:25.280410  139657 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/135234.pem
	I1213 13:16:25.293801  139657 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/135234.pem /etc/ssl/certs/135234.pem
	I1213 13:16:25.308024  139657 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/135234.pem
	I1213 13:16:25.313993  139657 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 13 13:13 /usr/share/ca-certificates/135234.pem
	I1213 13:16:25.314032  139657 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 13:13 /usr/share/ca-certificates/135234.pem
	I1213 13:16:25.314112  139657 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/135234.pem
	I1213 13:16:25.322512  139657 command_runner.go:130] > 51391683
	I1213 13:16:25.322716  139657 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 13:16:25.335714  139657 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 13:16:25.341584  139657 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 13:16:25.341629  139657 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1213 13:16:25.341635  139657 command_runner.go:130] > Device: 253,1	Inode: 7338073     Links: 1
	I1213 13:16:25.341641  139657 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 13:16:25.341647  139657 command_runner.go:130] > Access: 2025-12-13 13:13:54.213466193 +0000
	I1213 13:16:25.341652  139657 command_runner.go:130] > Modify: 2025-12-13 13:13:54.213466193 +0000
	I1213 13:16:25.341657  139657 command_runner.go:130] > Change: 2025-12-13 13:13:54.213466193 +0000
	I1213 13:16:25.341662  139657 command_runner.go:130] >  Birth: 2025-12-13 13:13:54.213466193 +0000
	I1213 13:16:25.341740  139657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 13:16:25.350002  139657 command_runner.go:130] > Certificate will not expire
	I1213 13:16:25.350186  139657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 13:16:25.358329  139657 command_runner.go:130] > Certificate will not expire
	I1213 13:16:25.358448  139657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 13:16:25.366344  139657 command_runner.go:130] > Certificate will not expire
	I1213 13:16:25.366481  139657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 13:16:25.374941  139657 command_runner.go:130] > Certificate will not expire
	I1213 13:16:25.375017  139657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 13:16:25.383466  139657 command_runner.go:130] > Certificate will not expire
	I1213 13:16:25.383560  139657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 13:16:25.391728  139657 command_runner.go:130] > Certificate will not expire
	I1213 13:16:25.391825  139657 kubeadm.go:401] StartCluster: {Name:functional-101171 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-101171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.124 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:16:25.391949  139657 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 13:16:25.392028  139657 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 13:16:25.432281  139657 command_runner.go:130] > f92a3a092485a0ac1dc51a2bc6f50ba873a8493104faa8027f92b47afffd326c
	I1213 13:16:25.432316  139657 command_runner.go:130] > 0f7e3e7bcf1b4fc58d523ea0a6b71f4b7f6159f908472192dec50c5c4773a6c8
	I1213 13:16:25.432327  139657 command_runner.go:130] > 82d65eddb23627c8c7b03d97ee25384a4641be44a8ce176195431ba631e420a4
	I1213 13:16:25.432337  139657 command_runner.go:130] > c035d6ae568bcf65e2e0e0ac9c8e33c9683cfa5e9962808be5bc1d7e90560b68
	I1213 13:16:25.432345  139657 command_runner.go:130] > f2e4d14cfaaeb50496758e8c7af82df0842b56679f7302760beb406f1d2377b0
	I1213 13:16:25.432364  139657 command_runner.go:130] > 5c5106dd6f44ef172d73e559df759af56aae17be00dbd7bda168113b5c87103e
	I1213 13:16:25.432372  139657 command_runner.go:130] > 9098da3bf6a16aa5aca362d77b4eefdf3d8740ee47058bac1f57462956a0ec41
	I1213 13:16:25.432382  139657 command_runner.go:130] > 032a755151e3edddee963cde3642ebab28ccd3cad4f977f5abe9be2793036fd5
	I1213 13:16:25.432392  139657 command_runner.go:130] > f8b0288ee3d2f686e17cab2f0126717e4773c0a011bf820a99b08c7146415889
	I1213 13:16:25.432405  139657 command_runner.go:130] > cb7606d3b6d8f2b73f95595faf6894b2622d71cebaf6f7aa31ae8cac07f16b57
	I1213 13:16:25.432417  139657 command_runner.go:130] > f02d47f5908b9925ba08e11c9c86ffc993d978b0210bc885a88444e31b6a2a63
	I1213 13:16:25.432448  139657 cri.go:89] found id: "f92a3a092485a0ac1dc51a2bc6f50ba873a8493104faa8027f92b47afffd326c"
	I1213 13:16:25.432463  139657 cri.go:89] found id: "0f7e3e7bcf1b4fc58d523ea0a6b71f4b7f6159f908472192dec50c5c4773a6c8"
	I1213 13:16:25.432471  139657 cri.go:89] found id: "82d65eddb23627c8c7b03d97ee25384a4641be44a8ce176195431ba631e420a4"
	I1213 13:16:25.432481  139657 cri.go:89] found id: "c035d6ae568bcf65e2e0e0ac9c8e33c9683cfa5e9962808be5bc1d7e90560b68"
	I1213 13:16:25.432487  139657 cri.go:89] found id: "f2e4d14cfaaeb50496758e8c7af82df0842b56679f7302760beb406f1d2377b0"
	I1213 13:16:25.432495  139657 cri.go:89] found id: "5c5106dd6f44ef172d73e559df759af56aae17be00dbd7bda168113b5c87103e"
	I1213 13:16:25.432501  139657 cri.go:89] found id: "9098da3bf6a16aa5aca362d77b4eefdf3d8740ee47058bac1f57462956a0ec41"
	I1213 13:16:25.432510  139657 cri.go:89] found id: "032a755151e3edddee963cde3642ebab28ccd3cad4f977f5abe9be2793036fd5"
	I1213 13:16:25.432516  139657 cri.go:89] found id: "f8b0288ee3d2f686e17cab2f0126717e4773c0a011bf820a99b08c7146415889"
	I1213 13:16:25.432528  139657 cri.go:89] found id: "cb7606d3b6d8f2b73f95595faf6894b2622d71cebaf6f7aa31ae8cac07f16b57"
	I1213 13:16:25.432537  139657 cri.go:89] found id: "f02d47f5908b9925ba08e11c9c86ffc993d978b0210bc885a88444e31b6a2a63"
	I1213 13:16:25.432544  139657 cri.go:89] found id: ""
	I1213 13:16:25.432611  139657 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
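The log above installs each extra CA certificate by hashing the PEM with "openssl x509 -hash -noout" and symlinking it into /etc/ssl/certs as <hash>.0. A minimal Go sketch of that step, assuming an exec'd openssl binary and root privileges; the file paths are illustrative and this is not minikube's actual certs.go helper:

// casymlink_sketch.go - hypothetical illustration, not minikube code.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert mirrors the step shown in the log: hash the PEM with openssl,
// then symlink it into /etc/ssl/certs as <hash>.0 so OpenSSL can locate it.
func installCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Equivalent of "ln -fs": drop any stale link, then create a fresh one.
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
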
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-101171 -n functional-101171
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-101171 -n functional-101171: exit status 2 (200.523721ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-101171" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (639.01s)
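The post-mortem above enumerates kube-system containers with crictl before the apiserver status check. A rough Go sketch of that discovery step, assuming local sudo/crictl access rather than minikube's ssh_runner; it is not the actual cri.go implementation:

// crilist_sketch.go - hypothetical sketch of the "crictl ps -a --quiet --label ..." step.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers returns the IDs of all kube-system containers,
// matching the command shown in the log above.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
}
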

                                                
                                    
TestFunctional/parallel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel
functional_test.go:184: Unable to run more tests (deadline exceeded)
--- FAIL: TestFunctional/parallel (0.00s)
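functional_test.go reports that the shared -timeout budget was consumed by the serial failures above, so no parallel subtests run. A Go test can make the same call itself via testing.T.Deadline; the sketch below is illustrative only, and the two-minute safety margin is an assumption, not the value the real test uses:

// deadline_sketch_test.go - illustrative only; not the actual functional_test.go check.
package functionalsketch

import (
	"testing"
	"time"
)

// TestParallelGroup skips its subtests when too little of the -timeout budget
// remains, similar in spirit to the "deadline exceeded" message above.
func TestParallelGroup(t *testing.T) {
	deadline, ok := t.Deadline()
	if ok && time.Until(deadline) < 2*time.Minute { // margin is an assumption
		t.Skip("Unable to run more tests (deadline exceeded)")
	}
	t.Run("sub", func(t *testing.T) {
		t.Parallel()
	})
}
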

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (2.89s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 image rm kicbase/echo-server:functional-359736 --alsologtostderr
functional_test.go:407: (dbg) Done: out/minikube-linux-amd64 -p functional-359736 image rm kicbase/echo-server:functional-359736 --alsologtostderr: (2.664057366s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 image ls
functional_test.go:418: expected "kicbase/echo-server:functional-359736" to be removed from minikube but still exists
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (2.89s)
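The ImageRemove failure boils down to: remove the tag, list images again, and expect the tag to be gone. A hypothetical stand-alone reproduction of that check is sketched below; the binary path and profile name are copied from the log, everything else is an assumption rather than the test's real helpers:

// imageremove_sketch.go - hypothetical reproduction of the functional_test.go:418 check.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "functional-359736"
	image := "kicbase/echo-server:" + profile

	// Remove the image, then list what the runtime still reports.
	if out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"image", "rm", image).CombinedOutput(); err != nil {
		fmt.Printf("image rm failed: %v\n%s", err, out)
		return
	}
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "image", "ls").Output()
	if err != nil {
		fmt.Println("image ls failed:", err)
		return
	}
	if strings.Contains(string(out), image) {
		fmt.Printf("expected %q to be removed but it still exists\n", image)
	}
}
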

                                                
                                    
TestPreload (143.7s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-820437 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio
E1213 14:41:15.655653  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-359736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-820437 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio: (1m27.459863347s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-820437 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 -p test-preload-820437 image pull gcr.io/k8s-minikube/busybox: (3.693839685s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-820437
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-820437: (7.153994625s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-820437 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-820437 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (42.776054466s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-820437 image list
preload_test.go:73: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20250512-df8de77b

                                                
                                                
-- /stdout --
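TestPreload's assertion is that an image pulled before a stop/start cycle with --preload=true still shows up in the image list afterwards. The sketch below outlines that sequence under stated assumptions: it omits the initial --preload=false start, uses plain exec calls instead of the test harness, and is not the real preload_test.go code.

// preload_sketch.go - hypothetical outline of the TestPreload flow and assertion.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func run(args ...string) (string, error) {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	return string(out), err
}

func main() {
	const profile = "test-preload-820437"
	steps := [][]string{
		{"-p", profile, "image", "pull", "gcr.io/k8s-minikube/busybox"},
		{"stop", "-p", profile},
		{"start", "-p", profile, "--preload=true", "--wait=true", "--driver=kvm2", "--container-runtime=crio"},
	}
	for _, s := range steps {
		if out, err := run(s...); err != nil {
			fmt.Printf("step %v failed: %v\n%s", s, err, out)
			return
		}
	}
	out, err := run("-p", profile, "image", "list")
	if err != nil {
		fmt.Println("image list failed:", err)
		return
	}
	// The failing assertion: a manually pulled image should survive the restart.
	if !strings.Contains(out, "gcr.io/k8s-minikube/busybox") {
		fmt.Println("busybox missing from image list after preload restart")
	}
}
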
panic.go:615: *** TestPreload FAILED at 2025-12-13 14:42:57.679815093 +0000 UTC m=+5863.127632206
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-820437 -n test-preload-820437
helpers_test.go:253: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-820437 logs -n 25
helpers_test.go:261: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-911357 ssh -n multinode-911357-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-911357     │ jenkins │ v1.37.0 │ 13 Dec 25 14:30 UTC │ 13 Dec 25 14:30 UTC │
	│ ssh     │ multinode-911357 ssh -n multinode-911357 sudo cat /home/docker/cp-test_multinode-911357-m03_multinode-911357.txt                                          │ multinode-911357     │ jenkins │ v1.37.0 │ 13 Dec 25 14:30 UTC │ 13 Dec 25 14:30 UTC │
	│ cp      │ multinode-911357 cp multinode-911357-m03:/home/docker/cp-test.txt multinode-911357-m02:/home/docker/cp-test_multinode-911357-m03_multinode-911357-m02.txt │ multinode-911357     │ jenkins │ v1.37.0 │ 13 Dec 25 14:30 UTC │ 13 Dec 25 14:30 UTC │
	│ ssh     │ multinode-911357 ssh -n multinode-911357-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-911357     │ jenkins │ v1.37.0 │ 13 Dec 25 14:30 UTC │ 13 Dec 25 14:30 UTC │
	│ ssh     │ multinode-911357 ssh -n multinode-911357-m02 sudo cat /home/docker/cp-test_multinode-911357-m03_multinode-911357-m02.txt                                  │ multinode-911357     │ jenkins │ v1.37.0 │ 13 Dec 25 14:30 UTC │ 13 Dec 25 14:30 UTC │
	│ node    │ multinode-911357 node stop m03                                                                                                                            │ multinode-911357     │ jenkins │ v1.37.0 │ 13 Dec 25 14:30 UTC │ 13 Dec 25 14:30 UTC │
	│ node    │ multinode-911357 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-911357     │ jenkins │ v1.37.0 │ 13 Dec 25 14:30 UTC │ 13 Dec 25 14:31 UTC │
	│ node    │ list -p multinode-911357                                                                                                                                  │ multinode-911357     │ jenkins │ v1.37.0 │ 13 Dec 25 14:31 UTC │                     │
	│ stop    │ -p multinode-911357                                                                                                                                       │ multinode-911357     │ jenkins │ v1.37.0 │ 13 Dec 25 14:31 UTC │ 13 Dec 25 14:33 UTC │
	│ start   │ -p multinode-911357 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-911357     │ jenkins │ v1.37.0 │ 13 Dec 25 14:33 UTC │ 13 Dec 25 14:35 UTC │
	│ node    │ list -p multinode-911357                                                                                                                                  │ multinode-911357     │ jenkins │ v1.37.0 │ 13 Dec 25 14:35 UTC │                     │
	│ node    │ multinode-911357 node delete m03                                                                                                                          │ multinode-911357     │ jenkins │ v1.37.0 │ 13 Dec 25 14:35 UTC │ 13 Dec 25 14:35 UTC │
	│ stop    │ multinode-911357 stop                                                                                                                                     │ multinode-911357     │ jenkins │ v1.37.0 │ 13 Dec 25 14:35 UTC │ 13 Dec 25 14:38 UTC │
	│ start   │ -p multinode-911357 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-911357     │ jenkins │ v1.37.0 │ 13 Dec 25 14:38 UTC │ 13 Dec 25 14:39 UTC │
	│ node    │ list -p multinode-911357                                                                                                                                  │ multinode-911357     │ jenkins │ v1.37.0 │ 13 Dec 25 14:39 UTC │                     │
	│ start   │ -p multinode-911357-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-911357-m02 │ jenkins │ v1.37.0 │ 13 Dec 25 14:39 UTC │                     │
	│ start   │ -p multinode-911357-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-911357-m03 │ jenkins │ v1.37.0 │ 13 Dec 25 14:39 UTC │ 13 Dec 25 14:40 UTC │
	│ node    │ add -p multinode-911357                                                                                                                                   │ multinode-911357     │ jenkins │ v1.37.0 │ 13 Dec 25 14:40 UTC │                     │
	│ delete  │ -p multinode-911357-m03                                                                                                                                   │ multinode-911357-m03 │ jenkins │ v1.37.0 │ 13 Dec 25 14:40 UTC │ 13 Dec 25 14:40 UTC │
	│ delete  │ -p multinode-911357                                                                                                                                       │ multinode-911357     │ jenkins │ v1.37.0 │ 13 Dec 25 14:40 UTC │ 13 Dec 25 14:40 UTC │
	│ start   │ -p test-preload-820437 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio                                │ test-preload-820437  │ jenkins │ v1.37.0 │ 13 Dec 25 14:40 UTC │ 13 Dec 25 14:42 UTC │
	│ image   │ test-preload-820437 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-820437  │ jenkins │ v1.37.0 │ 13 Dec 25 14:42 UTC │ 13 Dec 25 14:42 UTC │
	│ stop    │ -p test-preload-820437                                                                                                                                    │ test-preload-820437  │ jenkins │ v1.37.0 │ 13 Dec 25 14:42 UTC │ 13 Dec 25 14:42 UTC │
	│ start   │ -p test-preload-820437 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                          │ test-preload-820437  │ jenkins │ v1.37.0 │ 13 Dec 25 14:42 UTC │ 13 Dec 25 14:42 UTC │
	│ image   │ test-preload-820437 image list                                                                                                                            │ test-preload-820437  │ jenkins │ v1.37.0 │ 13 Dec 25 14:42 UTC │ 13 Dec 25 14:42 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 14:42:14
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 14:42:14.769436  168622 out.go:360] Setting OutFile to fd 1 ...
	I1213 14:42:14.769559  168622 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:42:14.769566  168622 out.go:374] Setting ErrFile to fd 2...
	I1213 14:42:14.769571  168622 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:42:14.769796  168622 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-131207/.minikube/bin
	I1213 14:42:14.770247  168622 out.go:368] Setting JSON to false
	I1213 14:42:14.771105  168622 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8675,"bootTime":1765628260,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 14:42:14.771163  168622 start.go:143] virtualization: kvm guest
	I1213 14:42:14.772939  168622 out.go:179] * [test-preload-820437] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 14:42:14.774213  168622 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 14:42:14.774258  168622 notify.go:221] Checking for updates...
	I1213 14:42:14.776125  168622 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 14:42:14.777247  168622 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-131207/kubeconfig
	I1213 14:42:14.778273  168622 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-131207/.minikube
	I1213 14:42:14.779392  168622 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 14:42:14.780491  168622 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 14:42:14.781892  168622 config.go:182] Loaded profile config "test-preload-820437": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 14:42:14.782373  168622 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 14:42:14.815889  168622 out.go:179] * Using the kvm2 driver based on existing profile
	I1213 14:42:14.817021  168622 start.go:309] selected driver: kvm2
	I1213 14:42:14.817034  168622 start.go:927] validating driver "kvm2" against &{Name:test-preload-820437 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-820437 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 14:42:14.817145  168622 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 14:42:14.817979  168622 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 14:42:14.818014  168622 cni.go:84] Creating CNI manager for ""
	I1213 14:42:14.818085  168622 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 14:42:14.818147  168622 start.go:353] cluster config:
	{Name:test-preload-820437 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-820437 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 14:42:14.818238  168622 iso.go:125] acquiring lock: {Name:mk3b22d147b17c1b05cdcd03e16c3f962e91cdaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 14:42:14.819518  168622 out.go:179] * Starting "test-preload-820437" primary control-plane node in "test-preload-820437" cluster
	I1213 14:42:14.820578  168622 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 14:42:14.820612  168622 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-131207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1213 14:42:14.820623  168622 cache.go:65] Caching tarball of preloaded images
	I1213 14:42:14.820706  168622 preload.go:238] Found /home/jenkins/minikube-integration/22122-131207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 14:42:14.820716  168622 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 14:42:14.820802  168622 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/test-preload-820437/config.json ...
	I1213 14:42:14.820996  168622 start.go:360] acquireMachinesLock for test-preload-820437: {Name:mkd3517afd6ad3d581ae9f96a02a4688cf83ce0e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 14:42:14.821042  168622 start.go:364] duration metric: took 27.276µs to acquireMachinesLock for "test-preload-820437"
	I1213 14:42:14.821056  168622 start.go:96] Skipping create...Using existing machine configuration
	I1213 14:42:14.821061  168622 fix.go:54] fixHost starting: 
	I1213 14:42:14.822973  168622 fix.go:112] recreateIfNeeded on test-preload-820437: state=Stopped err=<nil>
	W1213 14:42:14.822999  168622 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 14:42:14.824452  168622 out.go:252] * Restarting existing kvm2 VM for "test-preload-820437" ...
	I1213 14:42:14.824485  168622 main.go:143] libmachine: starting domain...
	I1213 14:42:14.824500  168622 main.go:143] libmachine: ensuring networks are active...
	I1213 14:42:14.825209  168622 main.go:143] libmachine: Ensuring network default is active
	I1213 14:42:14.825528  168622 main.go:143] libmachine: Ensuring network mk-test-preload-820437 is active
	I1213 14:42:14.825862  168622 main.go:143] libmachine: getting domain XML...
	I1213 14:42:14.826802  168622 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>test-preload-820437</name>
	  <uuid>413ccd68-6972-4b42-8c19-e40fbcb71a84</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22122-131207/.minikube/machines/test-preload-820437/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22122-131207/.minikube/machines/test-preload-820437/test-preload-820437.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:04:8a:9f'/>
	      <source network='mk-test-preload-820437'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:ed:7a:f2'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
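The driver then defines and boots this domain through libvirt. A speculative sketch of the same flow using the libvirt.org/go/libvirt bindings is shown below; the XML file path is hypothetical and this is not the kvm2 driver's actual code:

// domainstart_sketch.go - illustrative define-and-start of a domain from XML like the dump above.
package main

import (
	"fmt"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	xml, err := os.ReadFile("test-preload-820437.xml") // hypothetical path to the XML above
	if err != nil {
		panic(err)
	}
	conn, err := libvirt.NewConnect("qemu:///system") // the KVMQemuURI seen in the cluster config
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	// Equivalent to "starting domain..." in the log: boot the defined VM.
	if err := dom.Create(); err != nil {
		panic(err)
	}
	fmt.Println("domain is now running")
}
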
	
	I1213 14:42:16.090277  168622 main.go:143] libmachine: waiting for domain to start...
	I1213 14:42:16.091705  168622 main.go:143] libmachine: domain is now running
	I1213 14:42:16.091723  168622 main.go:143] libmachine: waiting for IP...
	I1213 14:42:16.092547  168622 main.go:143] libmachine: domain test-preload-820437 has defined MAC address 52:54:00:04:8a:9f in network mk-test-preload-820437
	I1213 14:42:16.093184  168622 main.go:143] libmachine: domain test-preload-820437 has current primary IP address 192.168.39.109 and MAC address 52:54:00:04:8a:9f in network mk-test-preload-820437
	I1213 14:42:16.093197  168622 main.go:143] libmachine: found domain IP: 192.168.39.109
	I1213 14:42:16.093202  168622 main.go:143] libmachine: reserving static IP address...
	I1213 14:42:16.093594  168622 main.go:143] libmachine: found host DHCP lease matching {name: "test-preload-820437", mac: "52:54:00:04:8a:9f", ip: "192.168.39.109"} in network mk-test-preload-820437: {Iface:virbr1 ExpiryTime:2025-12-13 15:40:51 +0000 UTC Type:0 Mac:52:54:00:04:8a:9f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:test-preload-820437 Clientid:01:52:54:00:04:8a:9f}
	I1213 14:42:16.093624  168622 main.go:143] libmachine: skip adding static IP to network mk-test-preload-820437 - found existing host DHCP lease matching {name: "test-preload-820437", mac: "52:54:00:04:8a:9f", ip: "192.168.39.109"}
	I1213 14:42:16.093635  168622 main.go:143] libmachine: reserved static IP address 192.168.39.109 for domain test-preload-820437
	I1213 14:42:16.093639  168622 main.go:143] libmachine: waiting for SSH...
	I1213 14:42:16.093645  168622 main.go:143] libmachine: Getting to WaitForSSH function...
	I1213 14:42:16.096044  168622 main.go:143] libmachine: domain test-preload-820437 has defined MAC address 52:54:00:04:8a:9f in network mk-test-preload-820437
	I1213 14:42:16.096397  168622 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:8a:9f", ip: ""} in network mk-test-preload-820437: {Iface:virbr1 ExpiryTime:2025-12-13 15:40:51 +0000 UTC Type:0 Mac:52:54:00:04:8a:9f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:test-preload-820437 Clientid:01:52:54:00:04:8a:9f}
	I1213 14:42:16.096419  168622 main.go:143] libmachine: domain test-preload-820437 has defined IP address 192.168.39.109 and MAC address 52:54:00:04:8a:9f in network mk-test-preload-820437
	I1213 14:42:16.096562  168622 main.go:143] libmachine: Using SSH client type: native
	I1213 14:42:16.096836  168622 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I1213 14:42:16.096854  168622 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1213 14:42:19.208320  168622 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I1213 14:42:25.288383  168622 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I1213 14:42:28.403106  168622 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 14:42:28.406847  168622 main.go:143] libmachine: domain test-preload-820437 has defined MAC address 52:54:00:04:8a:9f in network mk-test-preload-820437
	I1213 14:42:28.407342  168622 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:8a:9f", ip: ""} in network mk-test-preload-820437: {Iface:virbr1 ExpiryTime:2025-12-13 15:42:25 +0000 UTC Type:0 Mac:52:54:00:04:8a:9f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:test-preload-820437 Clientid:01:52:54:00:04:8a:9f}
	I1213 14:42:28.407377  168622 main.go:143] libmachine: domain test-preload-820437 has defined IP address 192.168.39.109 and MAC address 52:54:00:04:8a:9f in network mk-test-preload-820437
	I1213 14:42:28.407593  168622 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/test-preload-820437/config.json ...
	I1213 14:42:28.407800  168622 machine.go:94] provisionDockerMachine start ...
	I1213 14:42:28.409996  168622 main.go:143] libmachine: domain test-preload-820437 has defined MAC address 52:54:00:04:8a:9f in network mk-test-preload-820437
	I1213 14:42:28.410355  168622 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:8a:9f", ip: ""} in network mk-test-preload-820437: {Iface:virbr1 ExpiryTime:2025-12-13 15:42:25 +0000 UTC Type:0 Mac:52:54:00:04:8a:9f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:test-preload-820437 Clientid:01:52:54:00:04:8a:9f}
	I1213 14:42:28.410391  168622 main.go:143] libmachine: domain test-preload-820437 has defined IP address 192.168.39.109 and MAC address 52:54:00:04:8a:9f in network mk-test-preload-820437
	I1213 14:42:28.410526  168622 main.go:143] libmachine: Using SSH client type: native
	I1213 14:42:28.410724  168622 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I1213 14:42:28.410735  168622 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 14:42:28.525107  168622 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1213 14:42:28.525136  168622 buildroot.go:166] provisioning hostname "test-preload-820437"
	I1213 14:42:28.528511  168622 main.go:143] libmachine: domain test-preload-820437 has defined MAC address 52:54:00:04:8a:9f in network mk-test-preload-820437
	I1213 14:42:28.528978  168622 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:8a:9f", ip: ""} in network mk-test-preload-820437: {Iface:virbr1 ExpiryTime:2025-12-13 15:42:25 +0000 UTC Type:0 Mac:52:54:00:04:8a:9f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:test-preload-820437 Clientid:01:52:54:00:04:8a:9f}
	I1213 14:42:28.529027  168622 main.go:143] libmachine: domain test-preload-820437 has defined IP address 192.168.39.109 and MAC address 52:54:00:04:8a:9f in network mk-test-preload-820437
	I1213 14:42:28.529283  168622 main.go:143] libmachine: Using SSH client type: native
	I1213 14:42:28.529613  168622 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I1213 14:42:28.529638  168622 main.go:143] libmachine: About to run SSH command:
	sudo hostname test-preload-820437 && echo "test-preload-820437" | sudo tee /etc/hostname
	I1213 14:42:28.662510  168622 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-820437
	
	I1213 14:42:28.665250  168622 main.go:143] libmachine: domain test-preload-820437 has defined MAC address 52:54:00:04:8a:9f in network mk-test-preload-820437
	I1213 14:42:28.665758  168622 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:8a:9f", ip: ""} in network mk-test-preload-820437: {Iface:virbr1 ExpiryTime:2025-12-13 15:42:25 +0000 UTC Type:0 Mac:52:54:00:04:8a:9f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:test-preload-820437 Clientid:01:52:54:00:04:8a:9f}
	I1213 14:42:28.665781  168622 main.go:143] libmachine: domain test-preload-820437 has defined IP address 192.168.39.109 and MAC address 52:54:00:04:8a:9f in network mk-test-preload-820437
	I1213 14:42:28.666007  168622 main.go:143] libmachine: Using SSH client type: native
	I1213 14:42:28.666278  168622 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I1213 14:42:28.666296  168622 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-820437' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-820437/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-820437' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 14:42:28.791557  168622 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 14:42:28.791593  168622 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22122-131207/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-131207/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-131207/.minikube}
	I1213 14:42:28.791627  168622 buildroot.go:174] setting up certificates
	I1213 14:42:28.791639  168622 provision.go:84] configureAuth start
	I1213 14:42:28.794577  168622 main.go:143] libmachine: domain test-preload-820437 has defined MAC address 52:54:00:04:8a:9f in network mk-test-preload-820437
	I1213 14:42:28.795011  168622 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:8a:9f", ip: ""} in network mk-test-preload-820437: {Iface:virbr1 ExpiryTime:2025-12-13 15:42:25 +0000 UTC Type:0 Mac:52:54:00:04:8a:9f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:test-preload-820437 Clientid:01:52:54:00:04:8a:9f}
	I1213 14:42:28.795045  168622 main.go:143] libmachine: domain test-preload-820437 has defined IP address 192.168.39.109 and MAC address 52:54:00:04:8a:9f in network mk-test-preload-820437
	I1213 14:42:28.797355  168622 main.go:143] libmachine: domain test-preload-820437 has defined MAC address 52:54:00:04:8a:9f in network mk-test-preload-820437
	I1213 14:42:28.797722  168622 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:8a:9f", ip: ""} in network mk-test-preload-820437: {Iface:virbr1 ExpiryTime:2025-12-13 15:42:25 +0000 UTC Type:0 Mac:52:54:00:04:8a:9f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:test-preload-820437 Clientid:01:52:54:00:04:8a:9f}
	I1213 14:42:28.797748  168622 main.go:143] libmachine: domain test-preload-820437 has defined IP address 192.168.39.109 and MAC address 52:54:00:04:8a:9f in network mk-test-preload-820437
	I1213 14:42:28.797889  168622 provision.go:143] copyHostCerts
	I1213 14:42:28.797949  168622 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-131207/.minikube/key.pem, removing ...
	I1213 14:42:28.797965  168622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-131207/.minikube/key.pem
	I1213 14:42:28.798033  168622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-131207/.minikube/key.pem (1675 bytes)
	I1213 14:42:28.798170  168622 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-131207/.minikube/ca.pem, removing ...
	I1213 14:42:28.798181  168622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-131207/.minikube/ca.pem
	I1213 14:42:28.798211  168622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-131207/.minikube/ca.pem (1078 bytes)
	I1213 14:42:28.798279  168622 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-131207/.minikube/cert.pem, removing ...
	I1213 14:42:28.798286  168622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-131207/.minikube/cert.pem
	I1213 14:42:28.798310  168622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-131207/.minikube/cert.pem (1123 bytes)
	I1213 14:42:28.798364  168622 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-131207/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca-key.pem org=jenkins.test-preload-820437 san=[127.0.0.1 192.168.39.109 localhost minikube test-preload-820437]
	I1213 14:42:28.860568  168622 provision.go:177] copyRemoteCerts
	I1213 14:42:28.860639  168622 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 14:42:28.863396  168622 main.go:143] libmachine: domain test-preload-820437 has defined MAC address 52:54:00:04:8a:9f in network mk-test-preload-820437
	I1213 14:42:28.863768  168622 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:8a:9f", ip: ""} in network mk-test-preload-820437: {Iface:virbr1 ExpiryTime:2025-12-13 15:42:25 +0000 UTC Type:0 Mac:52:54:00:04:8a:9f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:test-preload-820437 Clientid:01:52:54:00:04:8a:9f}
	I1213 14:42:28.863791  168622 main.go:143] libmachine: domain test-preload-820437 has defined IP address 192.168.39.109 and MAC address 52:54:00:04:8a:9f in network mk-test-preload-820437
	I1213 14:42:28.863942  168622 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/test-preload-820437/id_rsa Username:docker}
	I1213 14:42:28.955871  168622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 14:42:28.987147  168622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1213 14:42:29.017790  168622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 14:42:29.047881  168622 provision.go:87] duration metric: took 256.228492ms to configureAuth
	I1213 14:42:29.047918  168622 buildroot.go:189] setting minikube options for container-runtime
	I1213 14:42:29.048104  168622 config.go:182] Loaded profile config "test-preload-820437": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 14:42:29.050916  168622 main.go:143] libmachine: domain test-preload-820437 has defined MAC address 52:54:00:04:8a:9f in network mk-test-preload-820437
	I1213 14:42:29.051311  168622 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:8a:9f", ip: ""} in network mk-test-preload-820437: {Iface:virbr1 ExpiryTime:2025-12-13 15:42:25 +0000 UTC Type:0 Mac:52:54:00:04:8a:9f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:test-preload-820437 Clientid:01:52:54:00:04:8a:9f}
	I1213 14:42:29.051332  168622 main.go:143] libmachine: domain test-preload-820437 has defined IP address 192.168.39.109 and MAC address 52:54:00:04:8a:9f in network mk-test-preload-820437
	I1213 14:42:29.051497  168622 main.go:143] libmachine: Using SSH client type: native
	I1213 14:42:29.051756  168622 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I1213 14:42:29.051772  168622 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 14:42:29.311910  168622 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 14:42:29.311940  168622 machine.go:97] duration metric: took 904.128221ms to provisionDockerMachine
	I1213 14:42:29.311952  168622 start.go:293] postStartSetup for "test-preload-820437" (driver="kvm2")
	I1213 14:42:29.311963  168622 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 14:42:29.312019  168622 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 14:42:29.314731  168622 main.go:143] libmachine: domain test-preload-820437 has defined MAC address 52:54:00:04:8a:9f in network mk-test-preload-820437
	I1213 14:42:29.315194  168622 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:8a:9f", ip: ""} in network mk-test-preload-820437: {Iface:virbr1 ExpiryTime:2025-12-13 15:42:25 +0000 UTC Type:0 Mac:52:54:00:04:8a:9f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:test-preload-820437 Clientid:01:52:54:00:04:8a:9f}
	I1213 14:42:29.315219  168622 main.go:143] libmachine: domain test-preload-820437 has defined IP address 192.168.39.109 and MAC address 52:54:00:04:8a:9f in network mk-test-preload-820437
	I1213 14:42:29.315362  168622 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/test-preload-820437/id_rsa Username:docker}
	I1213 14:42:29.404192  168622 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 14:42:29.409104  168622 info.go:137] Remote host: Buildroot 2025.02
	I1213 14:42:29.409132  168622 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-131207/.minikube/addons for local assets ...
	I1213 14:42:29.409195  168622 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-131207/.minikube/files for local assets ...
	I1213 14:42:29.409277  168622 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/ssl/certs/1352342.pem -> 1352342.pem in /etc/ssl/certs
	I1213 14:42:29.409373  168622 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 14:42:29.421175  168622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/ssl/certs/1352342.pem --> /etc/ssl/certs/1352342.pem (1708 bytes)
	I1213 14:42:29.451748  168622 start.go:296] duration metric: took 139.77781ms for postStartSetup
	I1213 14:42:29.451794  168622 fix.go:56] duration metric: took 14.630731478s for fixHost
	I1213 14:42:29.454641  168622 main.go:143] libmachine: domain test-preload-820437 has defined MAC address 52:54:00:04:8a:9f in network mk-test-preload-820437
	I1213 14:42:29.455057  168622 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:8a:9f", ip: ""} in network mk-test-preload-820437: {Iface:virbr1 ExpiryTime:2025-12-13 15:42:25 +0000 UTC Type:0 Mac:52:54:00:04:8a:9f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:test-preload-820437 Clientid:01:52:54:00:04:8a:9f}
	I1213 14:42:29.455091  168622 main.go:143] libmachine: domain test-preload-820437 has defined IP address 192.168.39.109 and MAC address 52:54:00:04:8a:9f in network mk-test-preload-820437
	I1213 14:42:29.455278  168622 main.go:143] libmachine: Using SSH client type: native
	I1213 14:42:29.455577  168622 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I1213 14:42:29.455593  168622 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1213 14:42:29.571615  168622 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765636949.535775979
	
	I1213 14:42:29.571640  168622 fix.go:216] guest clock: 1765636949.535775979
	I1213 14:42:29.571652  168622 fix.go:229] Guest: 2025-12-13 14:42:29.535775979 +0000 UTC Remote: 2025-12-13 14:42:29.451798278 +0000 UTC m=+14.731846127 (delta=83.977701ms)
	I1213 14:42:29.571669  168622 fix.go:200] guest clock delta is within tolerance: 83.977701ms
	I1213 14:42:29.571673  168622 start.go:83] releasing machines lock for "test-preload-820437", held for 14.750622839s
	I1213 14:42:29.574525  168622 main.go:143] libmachine: domain test-preload-820437 has defined MAC address 52:54:00:04:8a:9f in network mk-test-preload-820437
	I1213 14:42:29.574951  168622 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:8a:9f", ip: ""} in network mk-test-preload-820437: {Iface:virbr1 ExpiryTime:2025-12-13 15:42:25 +0000 UTC Type:0 Mac:52:54:00:04:8a:9f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:test-preload-820437 Clientid:01:52:54:00:04:8a:9f}
	I1213 14:42:29.574976  168622 main.go:143] libmachine: domain test-preload-820437 has defined IP address 192.168.39.109 and MAC address 52:54:00:04:8a:9f in network mk-test-preload-820437
	I1213 14:42:29.575488  168622 ssh_runner.go:195] Run: cat /version.json
	I1213 14:42:29.575594  168622 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 14:42:29.578796  168622 main.go:143] libmachine: domain test-preload-820437 has defined MAC address 52:54:00:04:8a:9f in network mk-test-preload-820437
	I1213 14:42:29.579247  168622 main.go:143] libmachine: domain test-preload-820437 has defined MAC address 52:54:00:04:8a:9f in network mk-test-preload-820437
	I1213 14:42:29.579249  168622 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:8a:9f", ip: ""} in network mk-test-preload-820437: {Iface:virbr1 ExpiryTime:2025-12-13 15:42:25 +0000 UTC Type:0 Mac:52:54:00:04:8a:9f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:test-preload-820437 Clientid:01:52:54:00:04:8a:9f}
	I1213 14:42:29.579312  168622 main.go:143] libmachine: domain test-preload-820437 has defined IP address 192.168.39.109 and MAC address 52:54:00:04:8a:9f in network mk-test-preload-820437
	I1213 14:42:29.579523  168622 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/test-preload-820437/id_rsa Username:docker}
	I1213 14:42:29.579790  168622 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:8a:9f", ip: ""} in network mk-test-preload-820437: {Iface:virbr1 ExpiryTime:2025-12-13 15:42:25 +0000 UTC Type:0 Mac:52:54:00:04:8a:9f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:test-preload-820437 Clientid:01:52:54:00:04:8a:9f}
	I1213 14:42:29.579819  168622 main.go:143] libmachine: domain test-preload-820437 has defined IP address 192.168.39.109 and MAC address 52:54:00:04:8a:9f in network mk-test-preload-820437
	I1213 14:42:29.580010  168622 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/test-preload-820437/id_rsa Username:docker}
	I1213 14:42:29.661930  168622 ssh_runner.go:195] Run: systemctl --version
	I1213 14:42:29.692824  168622 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 14:42:29.837432  168622 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 14:42:29.844650  168622 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 14:42:29.844734  168622 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 14:42:29.864663  168622 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 14:42:29.864692  168622 start.go:496] detecting cgroup driver to use...
	I1213 14:42:29.864775  168622 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 14:42:29.882817  168622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 14:42:29.899679  168622 docker.go:218] disabling cri-docker service (if available) ...
	I1213 14:42:29.899764  168622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 14:42:29.917614  168622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 14:42:29.935165  168622 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 14:42:30.087218  168622 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 14:42:30.305555  168622 docker.go:234] disabling docker service ...
	I1213 14:42:30.305631  168622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 14:42:30.323382  168622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 14:42:30.339951  168622 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 14:42:30.502530  168622 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 14:42:30.646260  168622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 14:42:30.662780  168622 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 14:42:30.686051  168622 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 14:42:30.686170  168622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 14:42:30.698704  168622 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 14:42:30.698791  168622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 14:42:30.711525  168622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 14:42:30.723934  168622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 14:42:30.736933  168622 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 14:42:30.750400  168622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 14:42:30.764737  168622 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 14:42:30.787708  168622 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 14:42:30.800558  168622 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 14:42:30.811511  168622 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 14:42:30.811592  168622 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 14:42:30.832336  168622 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 14:42:30.844571  168622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 14:42:30.989065  168622 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 14:42:31.097213  168622 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 14:42:31.097285  168622 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 14:42:31.102753  168622 start.go:564] Will wait 60s for crictl version
	I1213 14:42:31.102853  168622 ssh_runner.go:195] Run: which crictl
	I1213 14:42:31.106948  168622 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 14:42:31.140280  168622 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1213 14:42:31.140378  168622 ssh_runner.go:195] Run: crio --version
	I1213 14:42:31.169018  168622 ssh_runner.go:195] Run: crio --version
	I1213 14:42:31.200391  168622 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1213 14:42:31.204832  168622 main.go:143] libmachine: domain test-preload-820437 has defined MAC address 52:54:00:04:8a:9f in network mk-test-preload-820437
	I1213 14:42:31.205276  168622 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:8a:9f", ip: ""} in network mk-test-preload-820437: {Iface:virbr1 ExpiryTime:2025-12-13 15:42:25 +0000 UTC Type:0 Mac:52:54:00:04:8a:9f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:test-preload-820437 Clientid:01:52:54:00:04:8a:9f}
	I1213 14:42:31.205302  168622 main.go:143] libmachine: domain test-preload-820437 has defined IP address 192.168.39.109 and MAC address 52:54:00:04:8a:9f in network mk-test-preload-820437
	I1213 14:42:31.205525  168622 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1213 14:42:31.210353  168622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 14:42:31.227888  168622 kubeadm.go:884] updating cluster {Name:test-preload-820437 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-820437 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 14:42:31.228009  168622 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 14:42:31.228048  168622 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 14:42:31.264167  168622 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1213 14:42:31.264238  168622 ssh_runner.go:195] Run: which lz4
	I1213 14:42:31.268618  168622 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1213 14:42:31.273349  168622 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1213 14:42:31.273382  168622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
	I1213 14:42:32.525883  168622 crio.go:462] duration metric: took 1.257295837s to copy over tarball
	I1213 14:42:32.525973  168622 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1213 14:42:34.048766  168622 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.52276086s)
	I1213 14:42:34.048800  168622 crio.go:469] duration metric: took 1.522883029s to extract the tarball
	I1213 14:42:34.048811  168622 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1213 14:42:34.087414  168622 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 14:42:34.132175  168622 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 14:42:34.132202  168622 cache_images.go:86] Images are preloaded, skipping loading
	I1213 14:42:34.132211  168622 kubeadm.go:935] updating node { 192.168.39.109 8443 v1.34.2 crio true true} ...
	I1213 14:42:34.132333  168622 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-820437 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.109
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:test-preload-820437 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 14:42:34.132438  168622 ssh_runner.go:195] Run: crio config
	I1213 14:42:34.180640  168622 cni.go:84] Creating CNI manager for ""
	I1213 14:42:34.180677  168622 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 14:42:34.180706  168622 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 14:42:34.180752  168622 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.109 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-820437 NodeName:test-preload-820437 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.109"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.109 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 14:42:34.180910  168622 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.109
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-820437"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.109"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.109"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 14:42:34.180996  168622 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 14:42:34.193583  168622 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 14:42:34.193685  168622 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 14:42:34.205889  168622 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1213 14:42:34.227251  168622 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 14:42:34.248444  168622 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1213 14:42:34.269847  168622 ssh_runner.go:195] Run: grep 192.168.39.109	control-plane.minikube.internal$ /etc/hosts
	I1213 14:42:34.274131  168622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.109	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 14:42:34.288913  168622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 14:42:34.430069  168622 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 14:42:34.469772  168622 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/test-preload-820437 for IP: 192.168.39.109
	I1213 14:42:34.469813  168622 certs.go:195] generating shared ca certs ...
	I1213 14:42:34.469841  168622 certs.go:227] acquiring lock for ca certs: {Name:mk4d1e73c1a19abecca2e995e14d97b9ab149024 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 14:42:34.470054  168622 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-131207/.minikube/ca.key
	I1213 14:42:34.470181  168622 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-131207/.minikube/proxy-client-ca.key
	I1213 14:42:34.470200  168622 certs.go:257] generating profile certs ...
	I1213 14:42:34.470338  168622 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/test-preload-820437/client.key
	I1213 14:42:34.470424  168622 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/test-preload-820437/apiserver.key.747f05ba
	I1213 14:42:34.470489  168622 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/test-preload-820437/proxy-client.key
	I1213 14:42:34.470649  168622 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/135234.pem (1338 bytes)
	W1213 14:42:34.470695  168622 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-131207/.minikube/certs/135234_empty.pem, impossibly tiny 0 bytes
	I1213 14:42:34.470711  168622 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 14:42:34.470750  168622 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem (1078 bytes)
	I1213 14:42:34.470787  168622 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/cert.pem (1123 bytes)
	I1213 14:42:34.470823  168622 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/key.pem (1675 bytes)
	I1213 14:42:34.470887  168622 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/ssl/certs/1352342.pem (1708 bytes)
	I1213 14:42:34.471811  168622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 14:42:34.506665  168622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 14:42:34.540830  168622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 14:42:34.572697  168622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 14:42:34.603622  168622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/test-preload-820437/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1213 14:42:34.634806  168622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/test-preload-820437/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 14:42:34.665499  168622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/test-preload-820437/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 14:42:34.696510  168622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/test-preload-820437/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 14:42:34.727949  168622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 14:42:34.758309  168622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/certs/135234.pem --> /usr/share/ca-certificates/135234.pem (1338 bytes)
	I1213 14:42:34.788923  168622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/ssl/certs/1352342.pem --> /usr/share/ca-certificates/1352342.pem (1708 bytes)
	I1213 14:42:34.819216  168622 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 14:42:34.840718  168622 ssh_runner.go:195] Run: openssl version
	I1213 14:42:34.847206  168622 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1352342.pem
	I1213 14:42:34.859512  168622 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1352342.pem /etc/ssl/certs/1352342.pem
	I1213 14:42:34.872255  168622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1352342.pem
	I1213 14:42:34.877992  168622 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 14:00 /usr/share/ca-certificates/1352342.pem
	I1213 14:42:34.878107  168622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1352342.pem
	I1213 14:42:34.886052  168622 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 14:42:34.898237  168622 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1352342.pem /etc/ssl/certs/3ec20f2e.0
	I1213 14:42:34.910218  168622 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:42:34.922194  168622 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 14:42:34.933992  168622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:42:34.939257  168622 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 13:06 /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:42:34.939341  168622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:42:34.946601  168622 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 14:42:34.958697  168622 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 14:42:34.971004  168622 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/135234.pem
	I1213 14:42:34.982713  168622 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/135234.pem /etc/ssl/certs/135234.pem
	I1213 14:42:34.994970  168622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/135234.pem
	I1213 14:42:35.000343  168622 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 14:00 /usr/share/ca-certificates/135234.pem
	I1213 14:42:35.000447  168622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/135234.pem
	I1213 14:42:35.008046  168622 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 14:42:35.020197  168622 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/135234.pem /etc/ssl/certs/51391683.0
	I1213 14:42:35.032988  168622 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 14:42:35.038755  168622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 14:42:35.047106  168622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 14:42:35.055190  168622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 14:42:35.063142  168622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 14:42:35.070935  168622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 14:42:35.078376  168622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 14:42:35.085932  168622 kubeadm.go:401] StartCluster: {Name:test-preload-820437 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-820437 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 14:42:35.086024  168622 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 14:42:35.086088  168622 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 14:42:35.120823  168622 cri.go:89] found id: ""
	I1213 14:42:35.120907  168622 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 14:42:35.133115  168622 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 14:42:35.133137  168622 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 14:42:35.133191  168622 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 14:42:35.147499  168622 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 14:42:35.147976  168622 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-820437" does not appear in /home/jenkins/minikube-integration/22122-131207/kubeconfig
	I1213 14:42:35.148099  168622 kubeconfig.go:62] /home/jenkins/minikube-integration/22122-131207/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-820437" cluster setting kubeconfig missing "test-preload-820437" context setting]
	I1213 14:42:35.148373  168622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-131207/kubeconfig: {Name:mk5ec7ec5b8552878ed34d3387da68b813d7cd4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 14:42:35.148911  168622 kapi.go:59] client config for test-preload-820437: &rest.Config{Host:"https://192.168.39.109:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22122-131207/.minikube/profiles/test-preload-820437/client.crt", KeyFile:"/home/jenkins/minikube-integration/22122-131207/.minikube/profiles/test-preload-820437/client.key", CAFile:"/home/jenkins/minikube-integration/22122-131207/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 14:42:35.149397  168622 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 14:42:35.149417  168622 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 14:42:35.149425  168622 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 14:42:35.149431  168622 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 14:42:35.149437  168622 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 14:42:35.149845  168622 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 14:42:35.162289  168622 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.109
	I1213 14:42:35.162330  168622 kubeadm.go:1161] stopping kube-system containers ...
	I1213 14:42:35.162344  168622 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1213 14:42:35.162410  168622 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 14:42:35.206286  168622 cri.go:89] found id: ""
	I1213 14:42:35.206371  168622 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 14:42:35.234518  168622 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 14:42:35.247029  168622 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 14:42:35.247048  168622 kubeadm.go:158] found existing configuration files:
	
	I1213 14:42:35.247117  168622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 14:42:35.258634  168622 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 14:42:35.258712  168622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 14:42:35.270606  168622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 14:42:35.281673  168622 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 14:42:35.281740  168622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 14:42:35.293997  168622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 14:42:35.305288  168622 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 14:42:35.305367  168622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 14:42:35.317702  168622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 14:42:35.329779  168622 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 14:42:35.329851  168622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 14:42:35.342185  168622 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 14:42:35.354454  168622 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:42:35.409388  168622 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:42:36.208102  168622 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:42:36.444765  168622 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:42:36.513067  168622 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:42:36.606177  168622 api_server.go:52] waiting for apiserver process to appear ...
	I1213 14:42:36.606284  168622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:42:37.107227  168622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:42:37.607164  168622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:42:38.107334  168622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:42:38.606747  168622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:42:39.107259  168622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:42:39.130778  168622 api_server.go:72] duration metric: took 2.52461625s to wait for apiserver process to appear ...
	I1213 14:42:39.130814  168622 api_server.go:88] waiting for apiserver healthz status ...
	I1213 14:42:39.130839  168622 api_server.go:253] Checking apiserver healthz at https://192.168.39.109:8443/healthz ...
	I1213 14:42:41.147348  168622 api_server.go:279] https://192.168.39.109:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 14:42:41.147380  168622 api_server.go:103] status: https://192.168.39.109:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 14:42:41.147402  168622 api_server.go:253] Checking apiserver healthz at https://192.168.39.109:8443/healthz ...
	I1213 14:42:41.258917  168622 api_server.go:279] https://192.168.39.109:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 14:42:41.258964  168622 api_server.go:103] status: https://192.168.39.109:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 14:42:41.631557  168622 api_server.go:253] Checking apiserver healthz at https://192.168.39.109:8443/healthz ...
	I1213 14:42:41.638524  168622 api_server.go:279] https://192.168.39.109:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 14:42:41.638555  168622 api_server.go:103] status: https://192.168.39.109:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 14:42:42.131189  168622 api_server.go:253] Checking apiserver healthz at https://192.168.39.109:8443/healthz ...
	I1213 14:42:42.140423  168622 api_server.go:279] https://192.168.39.109:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 14:42:42.140453  168622 api_server.go:103] status: https://192.168.39.109:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 14:42:42.631161  168622 api_server.go:253] Checking apiserver healthz at https://192.168.39.109:8443/healthz ...
	I1213 14:42:42.635942  168622 api_server.go:279] https://192.168.39.109:8443/healthz returned 200:
	ok
	I1213 14:42:42.644434  168622 api_server.go:141] control plane version: v1.34.2
	I1213 14:42:42.644465  168622 api_server.go:131] duration metric: took 3.513644277s to wait for apiserver health ...
	I1213 14:42:42.644476  168622 cni.go:84] Creating CNI manager for ""
	I1213 14:42:42.644482  168622 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 14:42:42.646185  168622 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 14:42:42.647406  168622 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 14:42:42.661239  168622 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1213 14:42:42.692760  168622 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 14:42:42.700898  168622 system_pods.go:59] 7 kube-system pods found
	I1213 14:42:42.700942  168622 system_pods.go:61] "coredns-66bc5c9577-xcqpz" [866639b1-0b46-433f-9541-80f32a7c1a90] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 14:42:42.700951  168622 system_pods.go:61] "etcd-test-preload-820437" [c907c025-fd06-4d74-8fe5-b49a6f8f0415] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 14:42:42.700961  168622 system_pods.go:61] "kube-apiserver-test-preload-820437" [ba8aae21-0c6b-4a85-8cfd-962274ee3ddc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 14:42:42.700967  168622 system_pods.go:61] "kube-controller-manager-test-preload-820437" [40df0562-fa7b-458b-809a-03f0c66d298a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 14:42:42.700974  168622 system_pods.go:61] "kube-proxy-lnm45" [40bc9500-fe3e-4b26-b275-80260a232b76] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 14:42:42.700979  168622 system_pods.go:61] "kube-scheduler-test-preload-820437" [f06e2c5e-b778-444a-80da-c879f6b7c7df] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 14:42:42.700987  168622 system_pods.go:61] "storage-provisioner" [150c6bf0-d952-4939-bdd9-110279801a25] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 14:42:42.700994  168622 system_pods.go:74] duration metric: took 8.211995ms to wait for pod list to return data ...
	I1213 14:42:42.701001  168622 node_conditions.go:102] verifying NodePressure condition ...
	I1213 14:42:42.706018  168622 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 14:42:42.706051  168622 node_conditions.go:123] node cpu capacity is 2
	I1213 14:42:42.706087  168622 node_conditions.go:105] duration metric: took 5.080822ms to run NodePressure ...
	I1213 14:42:42.706141  168622 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:42:43.003148  168622 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1213 14:42:43.006894  168622 kubeadm.go:744] kubelet initialised
	I1213 14:42:43.006928  168622 kubeadm.go:745] duration metric: took 3.751385ms waiting for restarted kubelet to initialise ...
	I1213 14:42:43.006949  168622 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 14:42:43.021855  168622 ops.go:34] apiserver oom_adj: -16
	I1213 14:42:43.021897  168622 kubeadm.go:602] duration metric: took 7.888752947s to restartPrimaryControlPlane
	I1213 14:42:43.021908  168622 kubeadm.go:403] duration metric: took 7.935989166s to StartCluster
	I1213 14:42:43.021930  168622 settings.go:142] acquiring lock: {Name:mk721202c5d0c56fb9fb8fa9c13a73c8448f716f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 14:42:43.022004  168622 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22122-131207/kubeconfig
	I1213 14:42:43.022598  168622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-131207/kubeconfig: {Name:mk5ec7ec5b8552878ed34d3387da68b813d7cd4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 14:42:43.022828  168622 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.109 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 14:42:43.022904  168622 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 14:42:43.023015  168622 addons.go:70] Setting storage-provisioner=true in profile "test-preload-820437"
	I1213 14:42:43.023038  168622 addons.go:239] Setting addon storage-provisioner=true in "test-preload-820437"
	W1213 14:42:43.023051  168622 addons.go:248] addon storage-provisioner should already be in state true
	I1213 14:42:43.023063  168622 addons.go:70] Setting default-storageclass=true in profile "test-preload-820437"
	I1213 14:42:43.023094  168622 host.go:66] Checking if "test-preload-820437" exists ...
	I1213 14:42:43.023096  168622 config.go:182] Loaded profile config "test-preload-820437": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 14:42:43.023100  168622 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-820437"
	I1213 14:42:43.025329  168622 out.go:179] * Verifying Kubernetes components...
	I1213 14:42:43.025371  168622 kapi.go:59] client config for test-preload-820437: &rest.Config{Host:"https://192.168.39.109:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22122-131207/.minikube/profiles/test-preload-820437/client.crt", KeyFile:"/home/jenkins/minikube-integration/22122-131207/.minikube/profiles/test-preload-820437/client.key", CAFile:"/home/jenkins/minikube-integration/22122-131207/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 14:42:43.025643  168622 addons.go:239] Setting addon default-storageclass=true in "test-preload-820437"
	W1213 14:42:43.025659  168622 addons.go:248] addon default-storageclass should already be in state true
	I1213 14:42:43.025685  168622 host.go:66] Checking if "test-preload-820437" exists ...
	I1213 14:42:43.026490  168622 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 14:42:43.026537  168622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 14:42:43.027043  168622 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 14:42:43.027059  168622 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 14:42:43.027640  168622 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:42:43.027653  168622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 14:42:43.029850  168622 main.go:143] libmachine: domain test-preload-820437 has defined MAC address 52:54:00:04:8a:9f in network mk-test-preload-820437
	I1213 14:42:43.030250  168622 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:8a:9f", ip: ""} in network mk-test-preload-820437: {Iface:virbr1 ExpiryTime:2025-12-13 15:42:25 +0000 UTC Type:0 Mac:52:54:00:04:8a:9f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:test-preload-820437 Clientid:01:52:54:00:04:8a:9f}
	I1213 14:42:43.030274  168622 main.go:143] libmachine: domain test-preload-820437 has defined IP address 192.168.39.109 and MAC address 52:54:00:04:8a:9f in network mk-test-preload-820437
	I1213 14:42:43.030406  168622 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/test-preload-820437/id_rsa Username:docker}
	I1213 14:42:43.030542  168622 main.go:143] libmachine: domain test-preload-820437 has defined MAC address 52:54:00:04:8a:9f in network mk-test-preload-820437
	I1213 14:42:43.031068  168622 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:8a:9f", ip: ""} in network mk-test-preload-820437: {Iface:virbr1 ExpiryTime:2025-12-13 15:42:25 +0000 UTC Type:0 Mac:52:54:00:04:8a:9f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:test-preload-820437 Clientid:01:52:54:00:04:8a:9f}
	I1213 14:42:43.031127  168622 main.go:143] libmachine: domain test-preload-820437 has defined IP address 192.168.39.109 and MAC address 52:54:00:04:8a:9f in network mk-test-preload-820437
	I1213 14:42:43.031303  168622 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/test-preload-820437/id_rsa Username:docker}
	I1213 14:42:43.229018  168622 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 14:42:43.255833  168622 node_ready.go:35] waiting up to 6m0s for node "test-preload-820437" to be "Ready" ...
	I1213 14:42:43.415031  168622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:42:43.421380  168622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:42:44.075016  168622 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1213 14:42:44.076284  168622 addons.go:530] duration metric: took 1.05338764s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1213 14:42:45.260494  168622 node_ready.go:57] node "test-preload-820437" has "Ready":"False" status (will retry)
	W1213 14:42:47.759349  168622 node_ready.go:57] node "test-preload-820437" has "Ready":"False" status (will retry)
	W1213 14:42:49.761047  168622 node_ready.go:57] node "test-preload-820437" has "Ready":"False" status (will retry)
	I1213 14:42:51.759068  168622 node_ready.go:49] node "test-preload-820437" is "Ready"
	I1213 14:42:51.759120  168622 node_ready.go:38] duration metric: took 8.503231286s for node "test-preload-820437" to be "Ready" ...
	I1213 14:42:51.759135  168622 api_server.go:52] waiting for apiserver process to appear ...
	I1213 14:42:51.759194  168622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:42:51.784733  168622 api_server.go:72] duration metric: took 8.761866725s to wait for apiserver process to appear ...
	I1213 14:42:51.784766  168622 api_server.go:88] waiting for apiserver healthz status ...
	I1213 14:42:51.784787  168622 api_server.go:253] Checking apiserver healthz at https://192.168.39.109:8443/healthz ...
	I1213 14:42:51.790858  168622 api_server.go:279] https://192.168.39.109:8443/healthz returned 200:
	ok
	I1213 14:42:51.791861  168622 api_server.go:141] control plane version: v1.34.2
	I1213 14:42:51.791888  168622 api_server.go:131] duration metric: took 7.114402ms to wait for apiserver health ...
	I1213 14:42:51.791899  168622 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 14:42:51.795855  168622 system_pods.go:59] 7 kube-system pods found
	I1213 14:42:51.795880  168622 system_pods.go:61] "coredns-66bc5c9577-xcqpz" [866639b1-0b46-433f-9541-80f32a7c1a90] Running
	I1213 14:42:51.795886  168622 system_pods.go:61] "etcd-test-preload-820437" [c907c025-fd06-4d74-8fe5-b49a6f8f0415] Running
	I1213 14:42:51.795892  168622 system_pods.go:61] "kube-apiserver-test-preload-820437" [ba8aae21-0c6b-4a85-8cfd-962274ee3ddc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 14:42:51.795897  168622 system_pods.go:61] "kube-controller-manager-test-preload-820437" [40df0562-fa7b-458b-809a-03f0c66d298a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 14:42:51.795903  168622 system_pods.go:61] "kube-proxy-lnm45" [40bc9500-fe3e-4b26-b275-80260a232b76] Running
	I1213 14:42:51.795909  168622 system_pods.go:61] "kube-scheduler-test-preload-820437" [f06e2c5e-b778-444a-80da-c879f6b7c7df] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 14:42:51.795913  168622 system_pods.go:61] "storage-provisioner" [150c6bf0-d952-4939-bdd9-110279801a25] Running
	I1213 14:42:51.795919  168622 system_pods.go:74] duration metric: took 4.013236ms to wait for pod list to return data ...
	I1213 14:42:51.795930  168622 default_sa.go:34] waiting for default service account to be created ...
	I1213 14:42:51.798451  168622 default_sa.go:45] found service account: "default"
	I1213 14:42:51.798474  168622 default_sa.go:55] duration metric: took 2.537943ms for default service account to be created ...
	I1213 14:42:51.798484  168622 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 14:42:51.800927  168622 system_pods.go:86] 7 kube-system pods found
	I1213 14:42:51.800954  168622 system_pods.go:89] "coredns-66bc5c9577-xcqpz" [866639b1-0b46-433f-9541-80f32a7c1a90] Running
	I1213 14:42:51.800961  168622 system_pods.go:89] "etcd-test-preload-820437" [c907c025-fd06-4d74-8fe5-b49a6f8f0415] Running
	I1213 14:42:51.800973  168622 system_pods.go:89] "kube-apiserver-test-preload-820437" [ba8aae21-0c6b-4a85-8cfd-962274ee3ddc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 14:42:51.800979  168622 system_pods.go:89] "kube-controller-manager-test-preload-820437" [40df0562-fa7b-458b-809a-03f0c66d298a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 14:42:51.800988  168622 system_pods.go:89] "kube-proxy-lnm45" [40bc9500-fe3e-4b26-b275-80260a232b76] Running
	I1213 14:42:51.800994  168622 system_pods.go:89] "kube-scheduler-test-preload-820437" [f06e2c5e-b778-444a-80da-c879f6b7c7df] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 14:42:51.800998  168622 system_pods.go:89] "storage-provisioner" [150c6bf0-d952-4939-bdd9-110279801a25] Running
	I1213 14:42:51.801007  168622 system_pods.go:126] duration metric: took 2.517181ms to wait for k8s-apps to be running ...
	I1213 14:42:51.801013  168622 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 14:42:51.801058  168622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 14:42:51.816984  168622 system_svc.go:56] duration metric: took 15.958506ms WaitForService to wait for kubelet
	I1213 14:42:51.817030  168622 kubeadm.go:587] duration metric: took 8.794168416s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 14:42:51.817056  168622 node_conditions.go:102] verifying NodePressure condition ...
	I1213 14:42:51.819737  168622 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 14:42:51.819759  168622 node_conditions.go:123] node cpu capacity is 2
	I1213 14:42:51.819771  168622 node_conditions.go:105] duration metric: took 2.709744ms to run NodePressure ...
	I1213 14:42:51.819783  168622 start.go:242] waiting for startup goroutines ...
	I1213 14:42:51.819790  168622 start.go:247] waiting for cluster config update ...
	I1213 14:42:51.819800  168622 start.go:256] writing updated cluster config ...
	I1213 14:42:51.820067  168622 ssh_runner.go:195] Run: rm -f paused
	I1213 14:42:51.824904  168622 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 14:42:51.825452  168622 kapi.go:59] client config for test-preload-820437: &rest.Config{Host:"https://192.168.39.109:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22122-131207/.minikube/profiles/test-preload-820437/client.crt", KeyFile:"/home/jenkins/minikube-integration/22122-131207/.minikube/profiles/test-preload-820437/client.key", CAFile:"/home/jenkins/minikube-integration/22122-131207/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 14:42:51.828256  168622 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-xcqpz" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 14:42:51.838633  168622 pod_ready.go:94] pod "coredns-66bc5c9577-xcqpz" is "Ready"
	I1213 14:42:51.838664  168622 pod_ready.go:86] duration metric: took 10.382792ms for pod "coredns-66bc5c9577-xcqpz" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 14:42:51.840934  168622 pod_ready.go:83] waiting for pod "etcd-test-preload-820437" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 14:42:51.845226  168622 pod_ready.go:94] pod "etcd-test-preload-820437" is "Ready"
	I1213 14:42:51.845250  168622 pod_ready.go:86] duration metric: took 4.297529ms for pod "etcd-test-preload-820437" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 14:42:51.847656  168622 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-820437" in "kube-system" namespace to be "Ready" or be gone ...
	W1213 14:42:53.853426  168622 pod_ready.go:104] pod "kube-apiserver-test-preload-820437" is not "Ready", error: <nil>
	I1213 14:42:55.352812  168622 pod_ready.go:94] pod "kube-apiserver-test-preload-820437" is "Ready"
	I1213 14:42:55.352844  168622 pod_ready.go:86] duration metric: took 3.505159062s for pod "kube-apiserver-test-preload-820437" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 14:42:55.354769  168622 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-820437" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 14:42:56.861344  168622 pod_ready.go:94] pod "kube-controller-manager-test-preload-820437" is "Ready"
	I1213 14:42:56.861373  168622 pod_ready.go:86] duration metric: took 1.506584499s for pod "kube-controller-manager-test-preload-820437" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 14:42:56.863639  168622 pod_ready.go:83] waiting for pod "kube-proxy-lnm45" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 14:42:56.868484  168622 pod_ready.go:94] pod "kube-proxy-lnm45" is "Ready"
	I1213 14:42:56.868507  168622 pod_ready.go:86] duration metric: took 4.839205ms for pod "kube-proxy-lnm45" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 14:42:57.029973  168622 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-820437" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 14:42:57.429865  168622 pod_ready.go:94] pod "kube-scheduler-test-preload-820437" is "Ready"
	I1213 14:42:57.429905  168622 pod_ready.go:86] duration metric: took 399.903703ms for pod "kube-scheduler-test-preload-820437" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 14:42:57.429923  168622 pod_ready.go:40] duration metric: took 5.604983015s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 14:42:57.475465  168622 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1213 14:42:57.477756  168622 out.go:179] * Done! kubectl is now configured to use "test-preload-820437" cluster and "default" namespace by default
	
	
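The api_server.go entries in the log above repeatedly poll https://192.168.39.109:8443/healthz and proceed only once the endpoint returns 200 with body "ok". Below is a minimal, self-contained Go sketch of that kind of polling loop, not minikube's actual implementation: the function and variable names are invented for illustration, and the TLS handling is simplified with InsecureSkipVerify instead of trusting the cluster CA.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 with body "ok",
// or gives up after timeout. Hypothetical helper; the InsecureSkipVerify
// shortcut stands in for loading the cluster CA.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy within %s", timeout)
}

func main() {
	// Endpoint taken from the log above; timeout chosen arbitrarily.
	if err := waitForHealthz("https://192.168.39.109:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthy")
}
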
	==> CRI-O <==
	Dec 13 14:42:58 test-preload-820437 crio[832]: time="2025-12-13 14:42:58.243282056Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765636978243259463,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ba987a8b-8674-4e66-9b96-e0abf6230408 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 14:42:58 test-preload-820437 crio[832]: time="2025-12-13 14:42:58.244309192Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7fea0529-ae5a-4041-a4f9-d6e1f275ebb9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 14:42:58 test-preload-820437 crio[832]: time="2025-12-13 14:42:58.244564572Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7fea0529-ae5a-4041-a4f9-d6e1f275ebb9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 14:42:58 test-preload-820437 crio[832]: time="2025-12-13 14:42:58.245049769Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f941d3db535a3279cd3a56c222e2fc100c980c80046fda989daf39e9904c8d7e,PodSandboxId:997cecf8e3e133f3454aa4b29934d84c4131fa5fe901a1c318277bfa45d2c76a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765636969615166095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-xcqpz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 866639b1-0b46-433f-9541-80f32a7c1a90,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abfef1043ae5f454de144a2db7dcba8cb982072821875cd0a0441b997b04ff6c,PodSandboxId:0435542d3632aff30dda3623d1c0faaa6b822cb7662c8e8a14cba5503b07106f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765636961964033286,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lnm45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40bc9500-fe3e-4b26-b275-80260a232b76,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:998828f5d889c112d460986f3815215346e773adf1925960117732a1e7a7fe5a,PodSandboxId:5f98347c5572719424479740473c85b0bd889533e0d7c4bd4a309b30023006eb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765636961963456401,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 150c6bf0-d952-4939-bdd9-110279801a25,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:946cb255ffd6d4501d49342934c82daa8958ffb6b363903f84b5f4e15ce7b7da,PodSandboxId:66302831fb293ff71409bab781cbc4fa174158fc41d5b3566d31f95807fca12a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765636958473953006,Labels:map[string]string{io.kubernetes.container.name:
kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-820437,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 719f5ded4af4d2c3050922de7b667baa,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85917ea5f2eef914980f3a3c1999dd28960ad7cc1e133382ef8a72369d5ac483,PodSandboxId:6b7a6c84b953221e6b0929e5d1213989679c3624c748371a6d696730b4d62c1a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d
2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765636958445675159,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-820437,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35f14c541259b1fb3c7c7c4b4ad0acdd,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b74b6ca6e7315ca2bcdacc03a86d552854ca1d800532f9359ba9cb772bfb1c0,PodSandboxId:ea9c8ea8c35c09a6a7139d7ac5d4acd3b835635288f0a26d7e9d2971db187473,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765636958429765792,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-820437,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f649b636dd735490a671241555234563,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1412e7a14484a25c31bf8f00087fdf31d9e2620520036850f444d9de532e147,PodSandboxId:2538d24d653b6c165671179419095c0ab28cc2f257ae47f2b4103ddb87853e45,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&Imag
eSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765636958370671251,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-820437,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5e44bcf84ed93a4630330c7787d7bba,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7fea0529-ae5a-4041-a4f9-d6e1f275ebb9 name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 13 14:42:58 test-preload-820437 crio[832]: time="2025-12-13 14:42:58.279055569Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ae2c6069-9833-4ff7-b5c7-05eb146e4830 name=/runtime.v1.RuntimeService/Version
	Dec 13 14:42:58 test-preload-820437 crio[832]: time="2025-12-13 14:42:58.279469868Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ae2c6069-9833-4ff7-b5c7-05eb146e4830 name=/runtime.v1.RuntimeService/Version
	Dec 13 14:42:58 test-preload-820437 crio[832]: time="2025-12-13 14:42:58.281052963Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2c6b297e-6f27-4420-9635-955caecf0c19 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 14:42:58 test-preload-820437 crio[832]: time="2025-12-13 14:42:58.281726465Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765636978281703358,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2c6b297e-6f27-4420-9635-955caecf0c19 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 14:42:58 test-preload-820437 crio[832]: time="2025-12-13 14:42:58.282732664Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=73feb951-dab7-4b77-822d-e749cd6e8842 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 14:42:58 test-preload-820437 crio[832]: time="2025-12-13 14:42:58.282870977Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=73feb951-dab7-4b77-822d-e749cd6e8842 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 14:42:58 test-preload-820437 crio[832]: time="2025-12-13 14:42:58.283122240Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f941d3db535a3279cd3a56c222e2fc100c980c80046fda989daf39e9904c8d7e,PodSandboxId:997cecf8e3e133f3454aa4b29934d84c4131fa5fe901a1c318277bfa45d2c76a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765636969615166095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-xcqpz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 866639b1-0b46-433f-9541-80f32a7c1a90,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abfef1043ae5f454de144a2db7dcba8cb982072821875cd0a0441b997b04ff6c,PodSandboxId:0435542d3632aff30dda3623d1c0faaa6b822cb7662c8e8a14cba5503b07106f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765636961964033286,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lnm45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40bc9500-fe3e-4b26-b275-80260a232b76,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:998828f5d889c112d460986f3815215346e773adf1925960117732a1e7a7fe5a,PodSandboxId:5f98347c5572719424479740473c85b0bd889533e0d7c4bd4a309b30023006eb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765636961963456401,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 150c6bf0-d952-4939-bdd9-110279801a25,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:946cb255ffd6d4501d49342934c82daa8958ffb6b363903f84b5f4e15ce7b7da,PodSandboxId:66302831fb293ff71409bab781cbc4fa174158fc41d5b3566d31f95807fca12a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765636958473953006,Labels:map[string]string{io.kubernetes.container.name:
kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-820437,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 719f5ded4af4d2c3050922de7b667baa,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85917ea5f2eef914980f3a3c1999dd28960ad7cc1e133382ef8a72369d5ac483,PodSandboxId:6b7a6c84b953221e6b0929e5d1213989679c3624c748371a6d696730b4d62c1a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d
2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765636958445675159,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-820437,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35f14c541259b1fb3c7c7c4b4ad0acdd,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b74b6ca6e7315ca2bcdacc03a86d552854ca1d800532f9359ba9cb772bfb1c0,PodSandboxId:ea9c8ea8c35c09a6a7139d7ac5d4acd3b835635288f0a26d7e9d2971db187473,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765636958429765792,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-820437,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f649b636dd735490a671241555234563,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1412e7a14484a25c31bf8f00087fdf31d9e2620520036850f444d9de532e147,PodSandboxId:2538d24d653b6c165671179419095c0ab28cc2f257ae47f2b4103ddb87853e45,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&Imag
eSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765636958370671251,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-820437,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5e44bcf84ed93a4630330c7787d7bba,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=73feb951-dab7-4b77-822d-e749cd6e8842 name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 13 14:42:58 test-preload-820437 crio[832]: time="2025-12-13 14:42:58.314927767Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=54cac5d8-004b-41f0-b282-421d0843a6e9 name=/runtime.v1.RuntimeService/Version
	Dec 13 14:42:58 test-preload-820437 crio[832]: time="2025-12-13 14:42:58.315019631Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=54cac5d8-004b-41f0-b282-421d0843a6e9 name=/runtime.v1.RuntimeService/Version
	Dec 13 14:42:58 test-preload-820437 crio[832]: time="2025-12-13 14:42:58.318647589Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=54d3ee09-ef8a-40aa-935c-7d02f881f7b8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 14:42:58 test-preload-820437 crio[832]: time="2025-12-13 14:42:58.319197399Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765636978319173956,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=54d3ee09-ef8a-40aa-935c-7d02f881f7b8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 14:42:58 test-preload-820437 crio[832]: time="2025-12-13 14:42:58.320161654Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a528fa31-6e8d-457c-af49-717b73b3de53 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 14:42:58 test-preload-820437 crio[832]: time="2025-12-13 14:42:58.320225592Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a528fa31-6e8d-457c-af49-717b73b3de53 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 14:42:58 test-preload-820437 crio[832]: time="2025-12-13 14:42:58.320398369Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f941d3db535a3279cd3a56c222e2fc100c980c80046fda989daf39e9904c8d7e,PodSandboxId:997cecf8e3e133f3454aa4b29934d84c4131fa5fe901a1c318277bfa45d2c76a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765636969615166095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-xcqpz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 866639b1-0b46-433f-9541-80f32a7c1a90,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abfef1043ae5f454de144a2db7dcba8cb982072821875cd0a0441b997b04ff6c,PodSandboxId:0435542d3632aff30dda3623d1c0faaa6b822cb7662c8e8a14cba5503b07106f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765636961964033286,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lnm45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40bc9500-fe3e-4b26-b275-80260a232b76,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:998828f5d889c112d460986f3815215346e773adf1925960117732a1e7a7fe5a,PodSandboxId:5f98347c5572719424479740473c85b0bd889533e0d7c4bd4a309b30023006eb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765636961963456401,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 150c6bf0-d952-4939-bdd9-110279801a25,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:946cb255ffd6d4501d49342934c82daa8958ffb6b363903f84b5f4e15ce7b7da,PodSandboxId:66302831fb293ff71409bab781cbc4fa174158fc41d5b3566d31f95807fca12a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765636958473953006,Labels:map[string]string{io.kubernetes.container.name:
kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-820437,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 719f5ded4af4d2c3050922de7b667baa,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85917ea5f2eef914980f3a3c1999dd28960ad7cc1e133382ef8a72369d5ac483,PodSandboxId:6b7a6c84b953221e6b0929e5d1213989679c3624c748371a6d696730b4d62c1a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d
2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765636958445675159,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-820437,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35f14c541259b1fb3c7c7c4b4ad0acdd,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b74b6ca6e7315ca2bcdacc03a86d552854ca1d800532f9359ba9cb772bfb1c0,PodSandboxId:ea9c8ea8c35c09a6a7139d7ac5d4acd3b835635288f0a26d7e9d2971db187473,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765636958429765792,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-820437,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f649b636dd735490a671241555234563,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1412e7a14484a25c31bf8f00087fdf31d9e2620520036850f444d9de532e147,PodSandboxId:2538d24d653b6c165671179419095c0ab28cc2f257ae47f2b4103ddb87853e45,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&Imag
eSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765636958370671251,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-820437,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5e44bcf84ed93a4630330c7787d7bba,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a528fa31-6e8d-457c-af49-717b73b3de53 name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 13 14:42:58 test-preload-820437 crio[832]: time="2025-12-13 14:42:58.348119124Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ef63a9c0-0d9e-4d91-aed3-cffd4689b23e name=/runtime.v1.RuntimeService/Version
	Dec 13 14:42:58 test-preload-820437 crio[832]: time="2025-12-13 14:42:58.348398809Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ef63a9c0-0d9e-4d91-aed3-cffd4689b23e name=/runtime.v1.RuntimeService/Version
	Dec 13 14:42:58 test-preload-820437 crio[832]: time="2025-12-13 14:42:58.349805886Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ce15744c-e1c0-4153-8bfc-2346e09eedd9 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 14:42:58 test-preload-820437 crio[832]: time="2025-12-13 14:42:58.350238410Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765636978350218083,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ce15744c-e1c0-4153-8bfc-2346e09eedd9 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 14:42:58 test-preload-820437 crio[832]: time="2025-12-13 14:42:58.351298617Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=94692bc5-432d-4972-a680-184f6fc9c230 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 14:42:58 test-preload-820437 crio[832]: time="2025-12-13 14:42:58.351347156Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=94692bc5-432d-4972-a680-184f6fc9c230 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 14:42:58 test-preload-820437 crio[832]: time="2025-12-13 14:42:58.351574832Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f941d3db535a3279cd3a56c222e2fc100c980c80046fda989daf39e9904c8d7e,PodSandboxId:997cecf8e3e133f3454aa4b29934d84c4131fa5fe901a1c318277bfa45d2c76a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765636969615166095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-xcqpz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 866639b1-0b46-433f-9541-80f32a7c1a90,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abfef1043ae5f454de144a2db7dcba8cb982072821875cd0a0441b997b04ff6c,PodSandboxId:0435542d3632aff30dda3623d1c0faaa6b822cb7662c8e8a14cba5503b07106f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765636961964033286,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lnm45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40bc9500-fe3e-4b26-b275-80260a232b76,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:998828f5d889c112d460986f3815215346e773adf1925960117732a1e7a7fe5a,PodSandboxId:5f98347c5572719424479740473c85b0bd889533e0d7c4bd4a309b30023006eb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765636961963456401,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 150c6bf0-d952-4939-bdd9-110279801a25,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:946cb255ffd6d4501d49342934c82daa8958ffb6b363903f84b5f4e15ce7b7da,PodSandboxId:66302831fb293ff71409bab781cbc4fa174158fc41d5b3566d31f95807fca12a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765636958473953006,Labels:map[string]string{io.kubernetes.container.name:
kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-820437,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 719f5ded4af4d2c3050922de7b667baa,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85917ea5f2eef914980f3a3c1999dd28960ad7cc1e133382ef8a72369d5ac483,PodSandboxId:6b7a6c84b953221e6b0929e5d1213989679c3624c748371a6d696730b4d62c1a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d
2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765636958445675159,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-820437,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35f14c541259b1fb3c7c7c4b4ad0acdd,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b74b6ca6e7315ca2bcdacc03a86d552854ca1d800532f9359ba9cb772bfb1c0,PodSandboxId:ea9c8ea8c35c09a6a7139d7ac5d4acd3b835635288f0a26d7e9d2971db187473,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765636958429765792,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-820437,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f649b636dd735490a671241555234563,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1412e7a14484a25c31bf8f00087fdf31d9e2620520036850f444d9de532e147,PodSandboxId:2538d24d653b6c165671179419095c0ab28cc2f257ae47f2b4103ddb87853e45,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&Imag
eSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765636958370671251,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-820437,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5e44bcf84ed93a4630330c7787d7bba,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=94692bc5-432d-4972-a680-184f6fc9c230 name=/runtime.v1.RuntimeServic
e/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                           NAMESPACE
	f941d3db535a3       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   8 seconds ago       Running             coredns                   1                   997cecf8e3e13       coredns-66bc5c9577-xcqpz                      kube-system
	abfef1043ae5f       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   16 seconds ago      Running             kube-proxy                1                   0435542d3632a       kube-proxy-lnm45                              kube-system
	998828f5d889c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 seconds ago      Running             storage-provisioner       2                   5f98347c55727       storage-provisioner                           kube-system
	946cb255ffd6d       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   19 seconds ago      Running             kube-apiserver            1                   66302831fb293       kube-apiserver-test-preload-820437            kube-system
	85917ea5f2eef       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   19 seconds ago      Running             kube-controller-manager   1                   6b7a6c84b9532       kube-controller-manager-test-preload-820437   kube-system
	3b74b6ca6e731       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   19 seconds ago      Running             etcd                      1                   ea9c8ea8c35c0       etcd-test-preload-820437                      kube-system
	a1412e7a14484       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   20 seconds ago      Running             kube-scheduler            1                   2538d24d653b6       kube-scheduler-test-preload-820437            kube-system
	
	
	==> coredns [f941d3db535a3279cd3a56c222e2fc100c980c80046fda989daf39e9904c8d7e] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44107 - 61463 "HINFO IN 22486753608305176.4677795984033360580. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.027338301s
	
	
	==> describe nodes <==
	Name:               test-preload-820437
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-820437
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7
	                    minikube.k8s.io/name=test-preload-820437
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T14_41_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 14:41:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-820437
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 14:42:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 14:42:51 +0000   Sat, 13 Dec 2025 14:41:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 14:42:51 +0000   Sat, 13 Dec 2025 14:41:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 14:42:51 +0000   Sat, 13 Dec 2025 14:41:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 14:42:51 +0000   Sat, 13 Dec 2025 14:42:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.109
	  Hostname:    test-preload-820437
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	System Info:
	  Machine ID:                 413ccd6869724b428c19e40fbcb71a84
	  System UUID:                413ccd68-6972-4b42-8c19-e40fbcb71a84
	  Boot ID:                    e3487fa8-de67-4aee-8d45-10bf8c1d2fd2
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-xcqpz                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     90s
	  kube-system                 etcd-test-preload-820437                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         96s
	  kube-system                 kube-apiserver-test-preload-820437             250m (12%)    0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-controller-manager-test-preload-820437    200m (10%)    0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-proxy-lnm45                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-scheduler-test-preload-820437             100m (5%)     0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 89s                  kube-proxy       
	  Normal   Starting                 16s                  kube-proxy       
	  Normal   Starting                 102s                 kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  102s (x8 over 102s)  kubelet          Node test-preload-820437 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    102s (x8 over 102s)  kubelet          Node test-preload-820437 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     102s (x7 over 102s)  kubelet          Node test-preload-820437 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  102s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    96s                  kubelet          Node test-preload-820437 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  96s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  96s                  kubelet          Node test-preload-820437 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     96s                  kubelet          Node test-preload-820437 status is now: NodeHasSufficientPID
	  Normal   Starting                 96s                  kubelet          Starting kubelet.
	  Normal   NodeReady                95s                  kubelet          Node test-preload-820437 status is now: NodeReady
	  Normal   RegisteredNode           92s                  node-controller  Node test-preload-820437 event: Registered Node test-preload-820437 in Controller
	  Normal   Starting                 22s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  22s (x8 over 22s)    kubelet          Node test-preload-820437 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    22s (x8 over 22s)    kubelet          Node test-preload-820437 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     22s (x7 over 22s)    kubelet          Node test-preload-820437 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  22s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 17s                  kubelet          Node test-preload-820437 has been rebooted, boot id: e3487fa8-de67-4aee-8d45-10bf8c1d2fd2
	  Normal   RegisteredNode           14s                  node-controller  Node test-preload-820437 event: Registered Node test-preload-820437 in Controller
	
	
	==> dmesg <==
	[Dec13 14:42] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000042] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000622] (rpcbind)[117]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.947776] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000019] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.115654] kauditd_printk_skb: 60 callbacks suppressed
	[  +0.095502] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.465447] kauditd_printk_skb: 168 callbacks suppressed
	[  +0.001183] kauditd_printk_skb: 128 callbacks suppressed
	[  +0.035436] kauditd_printk_skb: 65 callbacks suppressed
	
	
	==> etcd [3b74b6ca6e7315ca2bcdacc03a86d552854ca1d800532f9359ba9cb772bfb1c0] <==
	{"level":"warn","ts":"2025-12-13T14:42:40.283431Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:42:40.310128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:42:40.319172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:42:40.332899Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:42:40.343751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:42:40.355572Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:42:40.362296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:42:40.372808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:42:40.383763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:42:40.398732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:42:40.402424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:42:40.411818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:42:40.419366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:42:40.427157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:42:40.436011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:42:40.444521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:42:40.454104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:42:40.460025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:42:40.467710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:42:40.475595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:42:40.486630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:42:40.497468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:42:40.506911Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:42:40.515998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:42:40.588004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44400","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:42:58 up 0 min,  0 users,  load average: 1.00, 0.27, 0.09
	Linux test-preload-820437 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Dec 13 11:18:23 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [946cb255ffd6d4501d49342934c82daa8958ffb6b363903f84b5f4e15ce7b7da] <==
	I1213 14:42:41.245987       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1213 14:42:41.247562       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1213 14:42:41.256148       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1213 14:42:41.256206       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1213 14:42:41.269900       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1213 14:42:41.269988       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1213 14:42:41.270479       1 aggregator.go:171] initial CRD sync complete...
	I1213 14:42:41.270539       1 autoregister_controller.go:144] Starting autoregister controller
	I1213 14:42:41.270546       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1213 14:42:41.270552       1 cache.go:39] Caches are synced for autoregister controller
	I1213 14:42:41.270735       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1213 14:42:41.276288       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1213 14:42:41.284026       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1213 14:42:41.302714       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1213 14:42:41.302752       1 policy_source.go:240] refreshing policies
	I1213 14:42:41.329382       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 14:42:41.581126       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 14:42:42.123864       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1213 14:42:42.841419       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 14:42:42.882188       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1213 14:42:42.919335       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 14:42:42.927283       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 14:42:44.594969       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 14:42:44.932513       1 controller.go:667] quota admission added evaluator for: endpoints
	I1213 14:42:44.985222       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [85917ea5f2eef914980f3a3c1999dd28960ad7cc1e133382ef8a72369d5ac483] <==
	I1213 14:42:44.573639       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1213 14:42:44.578351       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1213 14:42:44.578806       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1213 14:42:44.579866       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1213 14:42:44.580630       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1213 14:42:44.580642       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1213 14:42:44.580649       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1213 14:42:44.580755       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1213 14:42:44.580767       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1213 14:42:44.581568       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1213 14:42:44.581717       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1213 14:42:44.582524       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1213 14:42:44.585682       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1213 14:42:44.586471       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1213 14:42:44.588031       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1213 14:42:44.590168       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 14:42:44.590390       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1213 14:42:44.591610       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1213 14:42:44.591651       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1213 14:42:44.594272       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1213 14:42:44.595905       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1213 14:42:44.598579       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1213 14:42:44.604339       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 14:42:44.630010       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1213 14:42:54.562915       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [abfef1043ae5f454de144a2db7dcba8cb982072821875cd0a0441b997b04ff6c] <==
	I1213 14:42:42.202866       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 14:42:42.303029       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 14:42:42.303216       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.109"]
	E1213 14:42:42.303404       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 14:42:42.476956       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1213 14:42:42.477142       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1213 14:42:42.477254       1 server_linux.go:132] "Using iptables Proxier"
	I1213 14:42:42.495413       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 14:42:42.495903       1 server.go:527] "Version info" version="v1.34.2"
	I1213 14:42:42.495929       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 14:42:42.511717       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 14:42:42.512028       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 14:42:42.511971       1 config.go:200] "Starting service config controller"
	I1213 14:42:42.512217       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 14:42:42.512021       1 config.go:106] "Starting endpoint slice config controller"
	I1213 14:42:42.512231       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 14:42:42.514336       1 config.go:309] "Starting node config controller"
	I1213 14:42:42.515579       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 14:42:42.515606       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 14:42:42.612823       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1213 14:42:42.612937       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 14:42:42.612950       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [a1412e7a14484a25c31bf8f00087fdf31d9e2620520036850f444d9de532e147] <==
	I1213 14:42:39.645942       1 serving.go:386] Generated self-signed cert in-memory
	W1213 14:42:41.174240       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1213 14:42:41.174329       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1213 14:42:41.174340       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1213 14:42:41.174347       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1213 14:42:41.249679       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1213 14:42:41.249719       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 14:42:41.254467       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1213 14:42:41.254570       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1213 14:42:41.254198       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 14:42:41.256158       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 14:42:41.357334       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 13 14:42:41 test-preload-820437 kubelet[1188]: I1213 14:42:41.524561    1188 apiserver.go:52] "Watching apiserver"
	Dec 13 14:42:41 test-preload-820437 kubelet[1188]: E1213 14:42:41.528561    1188 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-66bc5c9577-xcqpz" podUID="866639b1-0b46-433f-9541-80f32a7c1a90"
	Dec 13 14:42:41 test-preload-820437 kubelet[1188]: I1213 14:42:41.540391    1188 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 13 14:42:41 test-preload-820437 kubelet[1188]: I1213 14:42:41.576991    1188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/40bc9500-fe3e-4b26-b275-80260a232b76-xtables-lock\") pod \"kube-proxy-lnm45\" (UID: \"40bc9500-fe3e-4b26-b275-80260a232b76\") " pod="kube-system/kube-proxy-lnm45"
	Dec 13 14:42:41 test-preload-820437 kubelet[1188]: I1213 14:42:41.577147    1188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/150c6bf0-d952-4939-bdd9-110279801a25-tmp\") pod \"storage-provisioner\" (UID: \"150c6bf0-d952-4939-bdd9-110279801a25\") " pod="kube-system/storage-provisioner"
	Dec 13 14:42:41 test-preload-820437 kubelet[1188]: I1213 14:42:41.577226    1188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/40bc9500-fe3e-4b26-b275-80260a232b76-lib-modules\") pod \"kube-proxy-lnm45\" (UID: \"40bc9500-fe3e-4b26-b275-80260a232b76\") " pod="kube-system/kube-proxy-lnm45"
	Dec 13 14:42:41 test-preload-820437 kubelet[1188]: E1213 14:42:41.577742    1188 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 13 14:42:41 test-preload-820437 kubelet[1188]: E1213 14:42:41.578539    1188 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/866639b1-0b46-433f-9541-80f32a7c1a90-config-volume podName:866639b1-0b46-433f-9541-80f32a7c1a90 nodeName:}" failed. No retries permitted until 2025-12-13 14:42:42.078519157 +0000 UTC m=+5.646283929 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/866639b1-0b46-433f-9541-80f32a7c1a90-config-volume") pod "coredns-66bc5c9577-xcqpz" (UID: "866639b1-0b46-433f-9541-80f32a7c1a90") : object "kube-system"/"coredns" not registered
	Dec 13 14:42:41 test-preload-820437 kubelet[1188]: E1213 14:42:41.596244    1188 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Dec 13 14:42:41 test-preload-820437 kubelet[1188]: I1213 14:42:41.694968    1188 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/etcd-test-preload-820437"
	Dec 13 14:42:41 test-preload-820437 kubelet[1188]: I1213 14:42:41.696302    1188 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-test-preload-820437"
	Dec 13 14:42:41 test-preload-820437 kubelet[1188]: E1213 14:42:41.707535    1188 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-test-preload-820437\" already exists" pod="kube-system/etcd-test-preload-820437"
	Dec 13 14:42:41 test-preload-820437 kubelet[1188]: E1213 14:42:41.709143    1188 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-test-preload-820437\" already exists" pod="kube-system/kube-apiserver-test-preload-820437"
	Dec 13 14:42:42 test-preload-820437 kubelet[1188]: E1213 14:42:42.081633    1188 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 13 14:42:42 test-preload-820437 kubelet[1188]: E1213 14:42:42.081776    1188 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/866639b1-0b46-433f-9541-80f32a7c1a90-config-volume podName:866639b1-0b46-433f-9541-80f32a7c1a90 nodeName:}" failed. No retries permitted until 2025-12-13 14:42:43.081762013 +0000 UTC m=+6.649526776 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/866639b1-0b46-433f-9541-80f32a7c1a90-config-volume") pod "coredns-66bc5c9577-xcqpz" (UID: "866639b1-0b46-433f-9541-80f32a7c1a90") : object "kube-system"/"coredns" not registered
	Dec 13 14:42:43 test-preload-820437 kubelet[1188]: E1213 14:42:43.088688    1188 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 13 14:42:43 test-preload-820437 kubelet[1188]: E1213 14:42:43.088781    1188 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/866639b1-0b46-433f-9541-80f32a7c1a90-config-volume podName:866639b1-0b46-433f-9541-80f32a7c1a90 nodeName:}" failed. No retries permitted until 2025-12-13 14:42:45.088767289 +0000 UTC m=+8.656532052 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/866639b1-0b46-433f-9541-80f32a7c1a90-config-volume") pod "coredns-66bc5c9577-xcqpz" (UID: "866639b1-0b46-433f-9541-80f32a7c1a90") : object "kube-system"/"coredns" not registered
	Dec 13 14:42:43 test-preload-820437 kubelet[1188]: E1213 14:42:43.601319    1188 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-66bc5c9577-xcqpz" podUID="866639b1-0b46-433f-9541-80f32a7c1a90"
	Dec 13 14:42:45 test-preload-820437 kubelet[1188]: E1213 14:42:45.104279    1188 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 13 14:42:45 test-preload-820437 kubelet[1188]: E1213 14:42:45.104343    1188 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/866639b1-0b46-433f-9541-80f32a7c1a90-config-volume podName:866639b1-0b46-433f-9541-80f32a7c1a90 nodeName:}" failed. No retries permitted until 2025-12-13 14:42:49.104330996 +0000 UTC m=+12.672095747 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/866639b1-0b46-433f-9541-80f32a7c1a90-config-volume") pod "coredns-66bc5c9577-xcqpz" (UID: "866639b1-0b46-433f-9541-80f32a7c1a90") : object "kube-system"/"coredns" not registered
	Dec 13 14:42:45 test-preload-820437 kubelet[1188]: E1213 14:42:45.601154    1188 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-66bc5c9577-xcqpz" podUID="866639b1-0b46-433f-9541-80f32a7c1a90"
	Dec 13 14:42:46 test-preload-820437 kubelet[1188]: E1213 14:42:46.593304    1188 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765636966592886023 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 13 14:42:46 test-preload-820437 kubelet[1188]: E1213 14:42:46.593327    1188 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765636966592886023 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 13 14:42:56 test-preload-820437 kubelet[1188]: E1213 14:42:56.596364    1188 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765636976596038035 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 13 14:42:56 test-preload-820437 kubelet[1188]: E1213 14:42:56.596383    1188 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765636976596038035 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	
	
	==> storage-provisioner [998828f5d889c112d460986f3815215346e773adf1925960117732a1e7a7fe5a] <==
	I1213 14:42:42.083896       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-820437 -n test-preload-820437
helpers_test.go:270: (dbg) Run:  kubectl --context test-preload-820437 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:176: Cleaning up "test-preload-820437" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-820437
--- FAIL: TestPreload (143.70s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (82.32s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-711635 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-711635 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m18.289191705s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-711635] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-131207/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-131207/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-711635" primary control-plane node in "pause-711635" cluster
	* Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-711635" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 14:46:26.319050  171187 out.go:360] Setting OutFile to fd 1 ...
	I1213 14:46:26.319279  171187 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:46:26.319295  171187 out.go:374] Setting ErrFile to fd 2...
	I1213 14:46:26.319301  171187 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:46:26.319620  171187 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-131207/.minikube/bin
	I1213 14:46:26.320341  171187 out.go:368] Setting JSON to false
	I1213 14:46:26.321731  171187 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8926,"bootTime":1765628260,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 14:46:26.321813  171187 start.go:143] virtualization: kvm guest
	I1213 14:46:26.326061  171187 out.go:179] * [pause-711635] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 14:46:26.329674  171187 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 14:46:26.329700  171187 notify.go:221] Checking for updates...
	I1213 14:46:26.332401  171187 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 14:46:26.333938  171187 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-131207/kubeconfig
	I1213 14:46:26.335366  171187 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-131207/.minikube
	I1213 14:46:26.336654  171187 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 14:46:26.337842  171187 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 14:46:26.339839  171187 config.go:182] Loaded profile config "pause-711635": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 14:46:26.340637  171187 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 14:46:26.374512  171187 out.go:179] * Using the kvm2 driver based on existing profile
	I1213 14:46:26.375776  171187 start.go:309] selected driver: kvm2
	I1213 14:46:26.375799  171187 start.go:927] validating driver "kvm2" against &{Name:pause-711635 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.34.2 ClusterName:pause-711635 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.50 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-instal
ler:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 14:46:26.376000  171187 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 14:46:26.377519  171187 cni.go:84] Creating CNI manager for ""
	I1213 14:46:26.377608  171187 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 14:46:26.377711  171187 start.go:353] cluster config:
	{Name:pause-711635 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-711635 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.50 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false p
ortainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 14:46:26.377887  171187 iso.go:125] acquiring lock: {Name:mk3b22d147b17c1b05cdcd03e16c3f962e91cdaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 14:46:26.379416  171187 out.go:179] * Starting "pause-711635" primary control-plane node in "pause-711635" cluster
	I1213 14:46:26.380422  171187 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 14:46:26.380470  171187 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-131207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1213 14:46:26.380488  171187 cache.go:65] Caching tarball of preloaded images
	I1213 14:46:26.380587  171187 preload.go:238] Found /home/jenkins/minikube-integration/22122-131207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 14:46:26.380601  171187 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 14:46:26.380757  171187 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/pause-711635/config.json ...
	I1213 14:46:26.381020  171187 start.go:360] acquireMachinesLock for pause-711635: {Name:mkd3517afd6ad3d581ae9f96a02a4688cf83ce0e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 14:46:51.239640  171187 start.go:364] duration metric: took 24.858543597s to acquireMachinesLock for "pause-711635"
	I1213 14:46:51.239695  171187 start.go:96] Skipping create...Using existing machine configuration
	I1213 14:46:51.239703  171187 fix.go:54] fixHost starting: 
	I1213 14:46:51.242531  171187 fix.go:112] recreateIfNeeded on pause-711635: state=Running err=<nil>
	W1213 14:46:51.242577  171187 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 14:46:51.244249  171187 out.go:252] * Updating the running kvm2 "pause-711635" VM ...
	I1213 14:46:51.244281  171187 machine.go:94] provisionDockerMachine start ...
	I1213 14:46:51.248476  171187 main.go:143] libmachine: domain pause-711635 has defined MAC address 52:54:00:19:16:03 in network mk-pause-711635
	I1213 14:46:51.249067  171187 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:19:16:03", ip: ""} in network mk-pause-711635: {Iface:virbr2 ExpiryTime:2025-12-13 15:45:20 +0000 UTC Type:0 Mac:52:54:00:19:16:03 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:pause-711635 Clientid:01:52:54:00:19:16:03}
	I1213 14:46:51.249122  171187 main.go:143] libmachine: domain pause-711635 has defined IP address 192.168.50.50 and MAC address 52:54:00:19:16:03 in network mk-pause-711635
	I1213 14:46:51.249470  171187 main.go:143] libmachine: Using SSH client type: native
	I1213 14:46:51.249798  171187 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.50 22 <nil> <nil>}
	I1213 14:46:51.249824  171187 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 14:46:51.361909  171187 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-711635
	
	I1213 14:46:51.386185  171187 buildroot.go:166] provisioning hostname "pause-711635"
	I1213 14:46:51.389553  171187 main.go:143] libmachine: domain pause-711635 has defined MAC address 52:54:00:19:16:03 in network mk-pause-711635
	I1213 14:46:51.390019  171187 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:19:16:03", ip: ""} in network mk-pause-711635: {Iface:virbr2 ExpiryTime:2025-12-13 15:45:20 +0000 UTC Type:0 Mac:52:54:00:19:16:03 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:pause-711635 Clientid:01:52:54:00:19:16:03}
	I1213 14:46:51.390053  171187 main.go:143] libmachine: domain pause-711635 has defined IP address 192.168.50.50 and MAC address 52:54:00:19:16:03 in network mk-pause-711635
	I1213 14:46:51.390304  171187 main.go:143] libmachine: Using SSH client type: native
	I1213 14:46:51.390538  171187 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.50 22 <nil> <nil>}
	I1213 14:46:51.390553  171187 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-711635 && echo "pause-711635" | sudo tee /etc/hostname
	I1213 14:46:51.517550  171187 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-711635
	
	I1213 14:46:51.521025  171187 main.go:143] libmachine: domain pause-711635 has defined MAC address 52:54:00:19:16:03 in network mk-pause-711635
	I1213 14:46:51.521490  171187 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:19:16:03", ip: ""} in network mk-pause-711635: {Iface:virbr2 ExpiryTime:2025-12-13 15:45:20 +0000 UTC Type:0 Mac:52:54:00:19:16:03 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:pause-711635 Clientid:01:52:54:00:19:16:03}
	I1213 14:46:51.521515  171187 main.go:143] libmachine: domain pause-711635 has defined IP address 192.168.50.50 and MAC address 52:54:00:19:16:03 in network mk-pause-711635
	I1213 14:46:51.521705  171187 main.go:143] libmachine: Using SSH client type: native
	I1213 14:46:51.521970  171187 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.50 22 <nil> <nil>}
	I1213 14:46:51.521987  171187 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-711635' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-711635/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-711635' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 14:46:51.640308  171187 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 14:46:51.640340  171187 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22122-131207/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-131207/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-131207/.minikube}
	I1213 14:46:51.640362  171187 buildroot.go:174] setting up certificates
	I1213 14:46:51.640373  171187 provision.go:84] configureAuth start
	I1213 14:46:51.643972  171187 main.go:143] libmachine: domain pause-711635 has defined MAC address 52:54:00:19:16:03 in network mk-pause-711635
	I1213 14:46:51.644462  171187 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:19:16:03", ip: ""} in network mk-pause-711635: {Iface:virbr2 ExpiryTime:2025-12-13 15:45:20 +0000 UTC Type:0 Mac:52:54:00:19:16:03 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:pause-711635 Clientid:01:52:54:00:19:16:03}
	I1213 14:46:51.644493  171187 main.go:143] libmachine: domain pause-711635 has defined IP address 192.168.50.50 and MAC address 52:54:00:19:16:03 in network mk-pause-711635
	I1213 14:46:51.647112  171187 main.go:143] libmachine: domain pause-711635 has defined MAC address 52:54:00:19:16:03 in network mk-pause-711635
	I1213 14:46:51.647526  171187 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:19:16:03", ip: ""} in network mk-pause-711635: {Iface:virbr2 ExpiryTime:2025-12-13 15:45:20 +0000 UTC Type:0 Mac:52:54:00:19:16:03 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:pause-711635 Clientid:01:52:54:00:19:16:03}
	I1213 14:46:51.647561  171187 main.go:143] libmachine: domain pause-711635 has defined IP address 192.168.50.50 and MAC address 52:54:00:19:16:03 in network mk-pause-711635
	I1213 14:46:51.647747  171187 provision.go:143] copyHostCerts
	I1213 14:46:51.647806  171187 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-131207/.minikube/ca.pem, removing ...
	I1213 14:46:51.647821  171187 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-131207/.minikube/ca.pem
	I1213 14:46:51.647882  171187 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-131207/.minikube/ca.pem (1078 bytes)
	I1213 14:46:51.648028  171187 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-131207/.minikube/cert.pem, removing ...
	I1213 14:46:51.648041  171187 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-131207/.minikube/cert.pem
	I1213 14:46:51.648093  171187 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-131207/.minikube/cert.pem (1123 bytes)
	I1213 14:46:51.648178  171187 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-131207/.minikube/key.pem, removing ...
	I1213 14:46:51.648187  171187 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-131207/.minikube/key.pem
	I1213 14:46:51.648219  171187 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-131207/.minikube/key.pem (1675 bytes)
	I1213 14:46:51.648305  171187 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-131207/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca-key.pem org=jenkins.pause-711635 san=[127.0.0.1 192.168.50.50 localhost minikube pause-711635]
	I1213 14:46:51.849935  171187 provision.go:177] copyRemoteCerts
	I1213 14:46:51.850040  171187 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 14:46:51.853635  171187 main.go:143] libmachine: domain pause-711635 has defined MAC address 52:54:00:19:16:03 in network mk-pause-711635
	I1213 14:46:51.854162  171187 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:19:16:03", ip: ""} in network mk-pause-711635: {Iface:virbr2 ExpiryTime:2025-12-13 15:45:20 +0000 UTC Type:0 Mac:52:54:00:19:16:03 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:pause-711635 Clientid:01:52:54:00:19:16:03}
	I1213 14:46:51.854204  171187 main.go:143] libmachine: domain pause-711635 has defined IP address 192.168.50.50 and MAC address 52:54:00:19:16:03 in network mk-pause-711635
	I1213 14:46:51.854391  171187 sshutil.go:53] new ssh client: &{IP:192.168.50.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/pause-711635/id_rsa Username:docker}
	I1213 14:46:51.945115  171187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 14:46:51.988530  171187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1213 14:46:52.026102  171187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 14:46:52.064859  171187 provision.go:87] duration metric: took 424.434285ms to configureAuth
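configureAuth above regenerates the docker-machine style server certificate with the SANs listed in the san=[...] field and, per the scp lines, installs it as /etc/docker/server.pem on the guest. One way to confirm which names the certificate actually covers (a sketch; only the file path shown in the log is assumed to exist):

	out/minikube-linux-amd64 -p pause-711635 ssh \
	  'sudo openssl x509 -noout -text -in /etc/docker/server.pem' | grep -A1 'Subject Alternative Name'
	# should list 127.0.0.1, 192.168.50.50, localhost, minikube and pause-711635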
	I1213 14:46:52.064902  171187 buildroot.go:189] setting minikube options for container-runtime
	I1213 14:46:52.065244  171187 config.go:182] Loaded profile config "pause-711635": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 14:46:52.068848  171187 main.go:143] libmachine: domain pause-711635 has defined MAC address 52:54:00:19:16:03 in network mk-pause-711635
	I1213 14:46:52.069386  171187 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:19:16:03", ip: ""} in network mk-pause-711635: {Iface:virbr2 ExpiryTime:2025-12-13 15:45:20 +0000 UTC Type:0 Mac:52:54:00:19:16:03 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:pause-711635 Clientid:01:52:54:00:19:16:03}
	I1213 14:46:52.069413  171187 main.go:143] libmachine: domain pause-711635 has defined IP address 192.168.50.50 and MAC address 52:54:00:19:16:03 in network mk-pause-711635
	I1213 14:46:52.069640  171187 main.go:143] libmachine: Using SSH client type: native
	I1213 14:46:52.069889  171187 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.50 22 <nil> <nil>}
	I1213 14:46:52.069911  171187 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 14:46:57.676811  171187 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 14:46:57.676860  171187 machine.go:97] duration metric: took 6.432560223s to provisionDockerMachine
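The SSH command issued at 14:46:52 writes CRIO_MINIKUBE_OPTIONS (here --insecure-registry 10.96.0.0/12, the service CIDR) into /etc/sysconfig/crio.minikube and restarts CRI-O; the gap until the 14:46:57 result line is that restart. A quick confirmation that the drop-in landed and the service came back (a sketch; assumes nothing beyond the paths and service name in the log):

	out/minikube-linux-amd64 -p pause-711635 ssh \
	  'cat /etc/sysconfig/crio.minikube; sudo systemctl is-active crio'
	# expected: the CRIO_MINIKUBE_OPTIONS line written above, followed by "active"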
	I1213 14:46:57.676878  171187 start.go:293] postStartSetup for "pause-711635" (driver="kvm2")
	I1213 14:46:57.676893  171187 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 14:46:57.676993  171187 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 14:46:57.680436  171187 main.go:143] libmachine: domain pause-711635 has defined MAC address 52:54:00:19:16:03 in network mk-pause-711635
	I1213 14:46:57.680908  171187 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:19:16:03", ip: ""} in network mk-pause-711635: {Iface:virbr2 ExpiryTime:2025-12-13 15:45:20 +0000 UTC Type:0 Mac:52:54:00:19:16:03 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:pause-711635 Clientid:01:52:54:00:19:16:03}
	I1213 14:46:57.680936  171187 main.go:143] libmachine: domain pause-711635 has defined IP address 192.168.50.50 and MAC address 52:54:00:19:16:03 in network mk-pause-711635
	I1213 14:46:57.681176  171187 sshutil.go:53] new ssh client: &{IP:192.168.50.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/pause-711635/id_rsa Username:docker}
	I1213 14:46:57.766692  171187 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 14:46:57.771364  171187 info.go:137] Remote host: Buildroot 2025.02
	I1213 14:46:57.771393  171187 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-131207/.minikube/addons for local assets ...
	I1213 14:46:57.771471  171187 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-131207/.minikube/files for local assets ...
	I1213 14:46:57.771596  171187 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/ssl/certs/1352342.pem -> 1352342.pem in /etc/ssl/certs
	I1213 14:46:57.771698  171187 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 14:46:57.783171  171187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/ssl/certs/1352342.pem --> /etc/ssl/certs/1352342.pem (1708 bytes)
	I1213 14:46:57.817938  171187 start.go:296] duration metric: took 141.04275ms for postStartSetup
	I1213 14:46:57.817986  171187 fix.go:56] duration metric: took 6.57828227s for fixHost
	I1213 14:46:57.821002  171187 main.go:143] libmachine: domain pause-711635 has defined MAC address 52:54:00:19:16:03 in network mk-pause-711635
	I1213 14:46:57.821503  171187 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:19:16:03", ip: ""} in network mk-pause-711635: {Iface:virbr2 ExpiryTime:2025-12-13 15:45:20 +0000 UTC Type:0 Mac:52:54:00:19:16:03 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:pause-711635 Clientid:01:52:54:00:19:16:03}
	I1213 14:46:57.821535  171187 main.go:143] libmachine: domain pause-711635 has defined IP address 192.168.50.50 and MAC address 52:54:00:19:16:03 in network mk-pause-711635
	I1213 14:46:57.821739  171187 main.go:143] libmachine: Using SSH client type: native
	I1213 14:46:57.822037  171187 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.50 22 <nil> <nil>}
	I1213 14:46:57.822056  171187 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1213 14:46:57.930842  171187 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765637217.927081611
	
	I1213 14:46:57.930882  171187 fix.go:216] guest clock: 1765637217.927081611
	I1213 14:46:57.930890  171187 fix.go:229] Guest: 2025-12-13 14:46:57.927081611 +0000 UTC Remote: 2025-12-13 14:46:57.817991977 +0000 UTC m=+31.556417870 (delta=109.089634ms)
	I1213 14:46:57.930910  171187 fix.go:200] guest clock delta is within tolerance: 109.089634ms
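fix.go compares the guest clock (the date +%s.%N output above) with the host clock and only forces a resync when the delta exceeds its tolerance; the ~109 ms skew here is accepted. Reproducing the comparison by hand looks roughly like this (a sketch; the bc arithmetic is illustrative, not how fix.go computes it):

	guest=$(out/minikube-linux-amd64 -p pause-711635 ssh 'date +%s.%N')
	host=$(date +%s.%N)
	echo "clock delta: $(echo "$host - $guest" | bc) s"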
	I1213 14:46:57.930915  171187 start.go:83] releasing machines lock for "pause-711635", held for 6.691242021s
	I1213 14:46:57.934627  171187 main.go:143] libmachine: domain pause-711635 has defined MAC address 52:54:00:19:16:03 in network mk-pause-711635
	I1213 14:46:57.935273  171187 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:19:16:03", ip: ""} in network mk-pause-711635: {Iface:virbr2 ExpiryTime:2025-12-13 15:45:20 +0000 UTC Type:0 Mac:52:54:00:19:16:03 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:pause-711635 Clientid:01:52:54:00:19:16:03}
	I1213 14:46:57.935298  171187 main.go:143] libmachine: domain pause-711635 has defined IP address 192.168.50.50 and MAC address 52:54:00:19:16:03 in network mk-pause-711635
	I1213 14:46:57.936044  171187 ssh_runner.go:195] Run: cat /version.json
	I1213 14:46:57.936139  171187 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 14:46:57.940815  171187 main.go:143] libmachine: domain pause-711635 has defined MAC address 52:54:00:19:16:03 in network mk-pause-711635
	I1213 14:46:57.940920  171187 main.go:143] libmachine: domain pause-711635 has defined MAC address 52:54:00:19:16:03 in network mk-pause-711635
	I1213 14:46:57.941339  171187 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:19:16:03", ip: ""} in network mk-pause-711635: {Iface:virbr2 ExpiryTime:2025-12-13 15:45:20 +0000 UTC Type:0 Mac:52:54:00:19:16:03 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:pause-711635 Clientid:01:52:54:00:19:16:03}
	I1213 14:46:57.941369  171187 main.go:143] libmachine: domain pause-711635 has defined IP address 192.168.50.50 and MAC address 52:54:00:19:16:03 in network mk-pause-711635
	I1213 14:46:57.941436  171187 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:19:16:03", ip: ""} in network mk-pause-711635: {Iface:virbr2 ExpiryTime:2025-12-13 15:45:20 +0000 UTC Type:0 Mac:52:54:00:19:16:03 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:pause-711635 Clientid:01:52:54:00:19:16:03}
	I1213 14:46:57.941466  171187 main.go:143] libmachine: domain pause-711635 has defined IP address 192.168.50.50 and MAC address 52:54:00:19:16:03 in network mk-pause-711635
	I1213 14:46:57.941542  171187 sshutil.go:53] new ssh client: &{IP:192.168.50.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/pause-711635/id_rsa Username:docker}
	I1213 14:46:57.941775  171187 sshutil.go:53] new ssh client: &{IP:192.168.50.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/pause-711635/id_rsa Username:docker}
	I1213 14:46:58.028843  171187 ssh_runner.go:195] Run: systemctl --version
	I1213 14:46:58.053196  171187 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 14:46:58.205327  171187 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 14:46:58.217006  171187 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 14:46:58.217086  171187 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 14:46:58.227636  171187 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 14:46:58.227665  171187 start.go:496] detecting cgroup driver to use...
	I1213 14:46:58.227736  171187 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 14:46:58.249333  171187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 14:46:58.275095  171187 docker.go:218] disabling cri-docker service (if available) ...
	I1213 14:46:58.275170  171187 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 14:46:58.305216  171187 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 14:46:58.327749  171187 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 14:46:58.548704  171187 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 14:46:58.741193  171187 docker.go:234] disabling docker service ...
	I1213 14:46:58.741297  171187 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 14:46:58.778710  171187 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 14:46:58.794510  171187 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 14:46:59.008165  171187 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 14:46:59.225241  171187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 14:46:59.247377  171187 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 14:46:59.276892  171187 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 14:46:59.276998  171187 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 14:46:59.290971  171187 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 14:46:59.291045  171187 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 14:46:59.303259  171187 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 14:46:59.318452  171187 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 14:46:59.335829  171187 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 14:46:59.350212  171187 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 14:46:59.369585  171187 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 14:46:59.386789  171187 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
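Together with the /etc/crictl.yaml written at 14:46:59.247 (which points plain crictl calls at the CRI-O socket), the sed sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroupfs cgroup manager, conmon_cgroup = "pod", and a default_sysctls entry that re-enables unprivileged low ports. A grep against the drop-in should show all four keys (a sketch; the exact file formatting is an assumption, the key/value pairs come from the sed commands above):

	out/minikube-linux-amd64 -p pause-711635 ssh \
	  "sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"
	# expected (approximately):
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#     "net.ipv4.ip_unprivileged_port_start=0",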
	I1213 14:46:59.404002  171187 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 14:46:59.415464  171187 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 14:46:59.430852  171187 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 14:46:59.648664  171187 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 14:46:59.931376  171187 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 14:46:59.931464  171187 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 14:46:59.938964  171187 start.go:564] Will wait 60s for crictl version
	I1213 14:46:59.939058  171187 ssh_runner.go:195] Run: which crictl
	I1213 14:46:59.944812  171187 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 14:46:59.986119  171187 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1213 14:46:59.986253  171187 ssh_runner.go:195] Run: crio --version
	I1213 14:47:00.021967  171187 ssh_runner.go:195] Run: crio --version
	I1213 14:47:00.058662  171187 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1213 14:47:00.063962  171187 main.go:143] libmachine: domain pause-711635 has defined MAC address 52:54:00:19:16:03 in network mk-pause-711635
	I1213 14:47:00.064597  171187 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:19:16:03", ip: ""} in network mk-pause-711635: {Iface:virbr2 ExpiryTime:2025-12-13 15:45:20 +0000 UTC Type:0 Mac:52:54:00:19:16:03 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:pause-711635 Clientid:01:52:54:00:19:16:03}
	I1213 14:47:00.064635  171187 main.go:143] libmachine: domain pause-711635 has defined IP address 192.168.50.50 and MAC address 52:54:00:19:16:03 in network mk-pause-711635
	I1213 14:47:00.064924  171187 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1213 14:47:00.070730  171187 kubeadm.go:884] updating cluster {Name:pause-711635 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2
ClusterName:pause-711635 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.50 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidi
a-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 14:47:00.070949  171187 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 14:47:00.071010  171187 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 14:47:00.122499  171187 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 14:47:00.122534  171187 crio.go:433] Images already preloaded, skipping extraction
	I1213 14:47:00.122599  171187 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 14:47:00.156606  171187 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 14:47:00.156631  171187 cache_images.go:86] Images are preloaded, skipping loading
	I1213 14:47:00.156641  171187 kubeadm.go:935] updating node { 192.168.50.50 8443 v1.34.2 crio true true} ...
	I1213 14:47:00.156809  171187 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-711635 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.50
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-711635 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
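kubeadm.go renders the kubelet unit drop-in shown above; the scp lines a little further down install it as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf next to /lib/systemd/system/kubelet.service. To see what systemd actually merges on the guest (a sketch; only those two paths from this log are assumed):

	out/minikube-linux-amd64 -p pause-711635 ssh 'sudo systemctl cat kubelet'
	# should print kubelet.service plus the 10-kubeadm.conf drop-in containing the
	# --hostname-override=pause-711635 and --node-ip=192.168.50.50 flags shown above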
	I1213 14:47:00.156919  171187 ssh_runner.go:195] Run: crio config
	I1213 14:47:00.212960  171187 cni.go:84] Creating CNI manager for ""
	I1213 14:47:00.212993  171187 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 14:47:00.213019  171187 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 14:47:00.213059  171187 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.50 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-711635 NodeName:pause-711635 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.50"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.50 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 14:47:00.213267  171187 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.50
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-711635"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.50"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.50"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
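The generated config above bundles four documents: InitConfiguration and ClusterConfiguration for kubeadm plus KubeletConfiguration and KubeProxyConfiguration; a few lines below it is written to /var/tmp/minikube/kubeadm.yaml.new. If you edit a copy by hand, recent kubeadm releases can check it before use (a sketch; availability of `kubeadm config validate` in this particular binary is an assumption):

	out/minikube-linux-amd64 -p pause-711635 ssh \
	  'sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new'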
	
	I1213 14:47:00.213362  171187 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 14:47:00.231881  171187 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 14:47:00.231969  171187 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 14:47:00.248751  171187 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1213 14:47:00.275241  171187 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 14:47:00.297746  171187 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1213 14:47:00.322369  171187 ssh_runner.go:195] Run: grep 192.168.50.50	control-plane.minikube.internal$ /etc/hosts
	I1213 14:47:00.328425  171187 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 14:47:00.683482  171187 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 14:47:00.763981  171187 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/pause-711635 for IP: 192.168.50.50
	I1213 14:47:00.764008  171187 certs.go:195] generating shared ca certs ...
	I1213 14:47:00.764029  171187 certs.go:227] acquiring lock for ca certs: {Name:mk4d1e73c1a19abecca2e995e14d97b9ab149024 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 14:47:00.764264  171187 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-131207/.minikube/ca.key
	I1213 14:47:00.764382  171187 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-131207/.minikube/proxy-client-ca.key
	I1213 14:47:00.764407  171187 certs.go:257] generating profile certs ...
	I1213 14:47:00.764567  171187 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/pause-711635/client.key
	I1213 14:47:00.764667  171187 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/pause-711635/apiserver.key.94455dde
	I1213 14:47:00.764732  171187 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/pause-711635/proxy-client.key
	I1213 14:47:00.764907  171187 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/135234.pem (1338 bytes)
	W1213 14:47:00.764967  171187 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-131207/.minikube/certs/135234_empty.pem, impossibly tiny 0 bytes
	I1213 14:47:00.764984  171187 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 14:47:00.765027  171187 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem (1078 bytes)
	I1213 14:47:00.765093  171187 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/cert.pem (1123 bytes)
	I1213 14:47:00.765143  171187 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/key.pem (1675 bytes)
	I1213 14:47:00.765245  171187 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/ssl/certs/1352342.pem (1708 bytes)
	I1213 14:47:00.766315  171187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 14:47:00.870679  171187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 14:47:00.956058  171187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 14:47:01.033161  171187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 14:47:01.075770  171187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/pause-711635/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1213 14:47:01.112189  171187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/pause-711635/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 14:47:01.156064  171187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/pause-711635/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 14:47:01.191476  171187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/pause-711635/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 14:47:01.234138  171187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 14:47:01.274663  171187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/certs/135234.pem --> /usr/share/ca-certificates/135234.pem (1338 bytes)
	I1213 14:47:01.338770  171187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/ssl/certs/1352342.pem --> /usr/share/ca-certificates/1352342.pem (1708 bytes)
	I1213 14:47:01.409614  171187 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 14:47:01.435869  171187 ssh_runner.go:195] Run: openssl version
	I1213 14:47:01.447311  171187 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1352342.pem
	I1213 14:47:01.465181  171187 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1352342.pem /etc/ssl/certs/1352342.pem
	I1213 14:47:01.477265  171187 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1352342.pem
	I1213 14:47:01.482797  171187 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 14:00 /usr/share/ca-certificates/1352342.pem
	I1213 14:47:01.482873  171187 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1352342.pem
	I1213 14:47:01.493157  171187 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 14:47:01.529364  171187 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:47:01.563057  171187 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 14:47:01.620154  171187 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:47:01.659744  171187 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 13:06 /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:47:01.659899  171187 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:47:01.695947  171187 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 14:47:01.747717  171187 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/135234.pem
	I1213 14:47:01.789545  171187 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/135234.pem /etc/ssl/certs/135234.pem
	I1213 14:47:01.841427  171187 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/135234.pem
	I1213 14:47:01.861471  171187 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 14:00 /usr/share/ca-certificates/135234.pem
	I1213 14:47:01.861550  171187 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/135234.pem
	I1213 14:47:01.886938  171187 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
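The test/ln/hash/test sequence above follows the standard OpenSSL CA-directory layout: each PEM under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash with a .0 suffix (3ec20f2e.0, b5213941.0 and 51391683.0 here). Recomputing one of the link names by hand (a sketch; only paths taken from this log are used, the tr is an assumed cleanup of any carriage return in the ssh output):

	h=$(out/minikube-linux-amd64 -p pause-711635 ssh \
	  'openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem' | tr -d '\r')
	out/minikube-linux-amd64 -p pause-711635 ssh "ls -l /etc/ssl/certs/${h}.0"
	# expected: a symlink pointing back at /usr/share/ca-certificates/minikubeCA.pem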
	I1213 14:47:01.929707  171187 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 14:47:01.961679  171187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 14:47:01.987388  171187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 14:47:02.016131  171187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 14:47:02.035434  171187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 14:47:02.052169  171187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 14:47:02.074303  171187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
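Each `openssl x509 -noout -checkend 86400` call above exits 0 only if the certificate will still be valid 24 hours from now, which is how minikube decides that the existing control-plane and etcd certs can be reused instead of regenerated. The same check over all of them at once (a sketch; the glob over /var/lib/minikube/certs is an assumption, the individual paths come from the log):

	out/minikube-linux-amd64 -p pause-711635 ssh '
	  for c in /var/lib/minikube/certs/*.crt /var/lib/minikube/certs/etcd/*.crt; do
	    sudo openssl x509 -noout -checkend 86400 -in "$c" || echo "expiring soon: $c"
	  done'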
	I1213 14:47:02.102647  171187 kubeadm.go:401] StartCluster: {Name:pause-711635 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 Cl
usterName:pause-711635 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.50 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-g
pu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 14:47:02.102836  171187 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 14:47:02.102940  171187 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 14:47:02.249283  171187 cri.go:89] found id: "0485cc8093555115bf029d30a24bed9c1eb5e14ad63d4669822c93895d864884"
	I1213 14:47:02.249311  171187 cri.go:89] found id: "55df0f6b2939ce926865c3d822f25235c14de29a8ea6a60cd697e55cdf027d9d"
	I1213 14:47:02.249318  171187 cri.go:89] found id: "7d368c6c2e20483aa5b61f03e53d3ad11acd476917ff5089ec8350ddabf87027"
	I1213 14:47:02.249324  171187 cri.go:89] found id: "a6bdf39155d38fe6ad628304312983b81c0e4d68980f0da049cbf021a87a05b9"
	I1213 14:47:02.249330  171187 cri.go:89] found id: "84830b7f3a05b649b115275544842e360763bf4e9331d64850437d38cb3d69e1"
	I1213 14:47:02.249336  171187 cri.go:89] found id: "4d2a56532056260bdd93e4b14a86c36307ac41998452bb61141f06cc76ec9477"
	I1213 14:47:02.249340  171187 cri.go:89] found id: "f83edb7f56ec7222cb74cca751c4ba220e1a3fcad81dce9e7e241660348b0493"
	I1213 14:47:02.249345  171187 cri.go:89] found id: "8e9ebe98e0ac3acba58f256c082e0de73e7d9385cb6a2521cb98a062713ecdf4"
	I1213 14:47:02.249350  171187 cri.go:89] found id: "82999fbb510eeda7012e606ff9f37bb5d429ce07985955e556746cc183dc17a9"
	I1213 14:47:02.249361  171187 cri.go:89] found id: "d1375c0e9aba7cfa8773de45d54ec8a8d032d9f0666cc081d7ea4d625de4b3bc"
	I1213 14:47:02.249366  171187 cri.go:89] found id: ""
	I1213 14:47:02.249422  171187 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-711635 -n pause-711635
helpers_test.go:253: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-711635 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p pause-711635 logs -n 25: (1.312393851s)
helpers_test.go:261: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                            ARGS                                                                             │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-305528 --schedule 15s -v=5 --alsologtostderr                                                                                              │ scheduled-stop-305528    │ jenkins │ v1.37.0 │ 13 Dec 25 14:44 UTC │                     │
	│ stop    │ -p scheduled-stop-305528 --schedule 15s -v=5 --alsologtostderr                                                                                              │ scheduled-stop-305528    │ jenkins │ v1.37.0 │ 13 Dec 25 14:44 UTC │ 13 Dec 25 14:44 UTC │
	│ delete  │ -p scheduled-stop-305528                                                                                                                                    │ scheduled-stop-305528    │ jenkins │ v1.37.0 │ 13 Dec 25 14:44 UTC │ 13 Dec 25 14:44 UTC │
	│ start   │ -p offline-crio-196030 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio                                             │ offline-crio-196030      │ jenkins │ v1.37.0 │ 13 Dec 25 14:44 UTC │ 13 Dec 25 14:46 UTC │
	│ start   │ -p pause-711635 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio                                                     │ pause-711635             │ jenkins │ v1.37.0 │ 13 Dec 25 14:44 UTC │ 13 Dec 25 14:46 UTC │
	│ start   │ -p NoKubernetes-303609 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio                                                 │ NoKubernetes-303609      │ jenkins │ v1.37.0 │ 13 Dec 25 14:44 UTC │                     │
	│ start   │ -p NoKubernetes-303609 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                                         │ NoKubernetes-303609      │ jenkins │ v1.37.0 │ 13 Dec 25 14:44 UTC │ 13 Dec 25 14:46 UTC │
	│ start   │ -p running-upgrade-352355 --memory=3072 --vm-driver=kvm2  --container-runtime=crio                                                                          │ running-upgrade-352355   │ jenkins │ v1.35.0 │ 13 Dec 25 14:44 UTC │ 13 Dec 25 14:46 UTC │
	│ start   │ -p NoKubernetes-303609 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                         │ NoKubernetes-303609      │ jenkins │ v1.37.0 │ 13 Dec 25 14:46 UTC │ 13 Dec 25 14:46 UTC │
	│ delete  │ -p NoKubernetes-303609                                                                                                                                      │ NoKubernetes-303609      │ jenkins │ v1.37.0 │ 13 Dec 25 14:46 UTC │ 13 Dec 25 14:46 UTC │
	│ start   │ -p NoKubernetes-303609 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                         │ NoKubernetes-303609      │ jenkins │ v1.37.0 │ 13 Dec 25 14:46 UTC │ 13 Dec 25 14:46 UTC │
	│ delete  │ -p offline-crio-196030                                                                                                                                      │ offline-crio-196030      │ jenkins │ v1.37.0 │ 13 Dec 25 14:46 UTC │ 13 Dec 25 14:46 UTC │
	│ start   │ -p stopped-upgrade-729395 --memory=3072 --vm-driver=kvm2  --container-runtime=crio                                                                          │ stopped-upgrade-729395   │ jenkins │ v1.35.0 │ 13 Dec 25 14:46 UTC │ 13 Dec 25 14:47 UTC │
	│ start   │ -p running-upgrade-352355 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                      │ running-upgrade-352355   │ jenkins │ v1.37.0 │ 13 Dec 25 14:46 UTC │                     │
	│ start   │ -p pause-711635 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                              │ pause-711635             │ jenkins │ v1.37.0 │ 13 Dec 25 14:46 UTC │ 13 Dec 25 14:47 UTC │
	│ ssh     │ -p NoKubernetes-303609 sudo systemctl is-active --quiet service kubelet                                                                                     │ NoKubernetes-303609      │ jenkins │ v1.37.0 │ 13 Dec 25 14:46 UTC │                     │
	│ stop    │ -p NoKubernetes-303609                                                                                                                                      │ NoKubernetes-303609      │ jenkins │ v1.37.0 │ 13 Dec 25 14:46 UTC │ 13 Dec 25 14:46 UTC │
	│ start   │ -p NoKubernetes-303609 --driver=kvm2  --container-runtime=crio                                                                                              │ NoKubernetes-303609      │ jenkins │ v1.37.0 │ 13 Dec 25 14:46 UTC │ 13 Dec 25 14:47 UTC │
	│ stop    │ stopped-upgrade-729395 stop                                                                                                                                 │ stopped-upgrade-729395   │ jenkins │ v1.35.0 │ 13 Dec 25 14:47 UTC │ 13 Dec 25 14:47 UTC │
	│ start   │ -p stopped-upgrade-729395 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                      │ stopped-upgrade-729395   │ jenkins │ v1.37.0 │ 13 Dec 25 14:47 UTC │ 13 Dec 25 14:47 UTC │
	│ ssh     │ -p NoKubernetes-303609 sudo systemctl is-active --quiet service kubelet                                                                                     │ NoKubernetes-303609      │ jenkins │ v1.37.0 │ 13 Dec 25 14:47 UTC │                     │
	│ delete  │ -p NoKubernetes-303609                                                                                                                                      │ NoKubernetes-303609      │ jenkins │ v1.37.0 │ 13 Dec 25 14:47 UTC │ 13 Dec 25 14:47 UTC │
	│ start   │ -p force-systemd-env-936726 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                                    │ force-systemd-env-936726 │ jenkins │ v1.37.0 │ 13 Dec 25 14:47 UTC │                     │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-729395 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker │ stopped-upgrade-729395   │ jenkins │ v1.37.0 │ 13 Dec 25 14:47 UTC │                     │
	│ delete  │ -p stopped-upgrade-729395                                                                                                                                   │ stopped-upgrade-729395   │ jenkins │ v1.37.0 │ 13 Dec 25 14:47 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 14:47:16
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 14:47:16.435660  171994 out.go:360] Setting OutFile to fd 1 ...
	I1213 14:47:16.435753  171994 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:47:16.435757  171994 out.go:374] Setting ErrFile to fd 2...
	I1213 14:47:16.435761  171994 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:47:16.435972  171994 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-131207/.minikube/bin
	I1213 14:47:16.436450  171994 out.go:368] Setting JSON to false
	I1213 14:47:16.437375  171994 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8976,"bootTime":1765628260,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 14:47:16.437436  171994 start.go:143] virtualization: kvm guest
	I1213 14:47:16.439358  171994 out.go:179] * [force-systemd-env-936726] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 14:47:16.440476  171994 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 14:47:16.440510  171994 notify.go:221] Checking for updates...
	I1213 14:47:16.442650  171994 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 14:47:16.443914  171994 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-131207/kubeconfig
	I1213 14:47:16.444905  171994 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-131207/.minikube
	I1213 14:47:16.445910  171994 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 14:47:16.446961  171994 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1213 14:47:16.448749  171994 config.go:182] Loaded profile config "pause-711635": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 14:47:16.448890  171994 config.go:182] Loaded profile config "running-upgrade-352355": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1213 14:47:16.449010  171994 config.go:182] Loaded profile config "stopped-upgrade-729395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1213 14:47:16.449166  171994 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 14:47:16.485700  171994 out.go:179] * Using the kvm2 driver based on user configuration
	I1213 14:47:16.486718  171994 start.go:309] selected driver: kvm2
	I1213 14:47:16.486732  171994 start.go:927] validating driver "kvm2" against <nil>
	I1213 14:47:16.486743  171994 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 14:47:16.487736  171994 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 14:47:16.488028  171994 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 14:47:16.488066  171994 cni.go:84] Creating CNI manager for ""
	I1213 14:47:16.488141  171994 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 14:47:16.488155  171994 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1213 14:47:16.488206  171994 start.go:353] cluster config:
	{Name:force-systemd-env-936726 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-env-936726 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 14:47:16.488342  171994 iso.go:125] acquiring lock: {Name:mk3b22d147b17c1b05cdcd03e16c3f962e91cdaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 14:47:16.489787  171994 out.go:179] * Starting "force-systemd-env-936726" primary control-plane node in "force-systemd-env-936726" cluster
	I1213 14:47:15.815795  171101 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 d20d51dfd3ea1c42e76f51de987dc0055b975553a6dfddd0e5bdd05d3801b1cd 8d9cbb53f37f0fdfc06cbf4901840cd15b164d449f8151b13ff3fe64cde068d2 44d01a73ae43fd9407a7465076ca692070ba4df0d1d14e079dea312a41c56d3b 91e8ba85fc5fbc949ccc20c641ca680bff9a2e5a34078139d87c51da9dd816db f0d50d08bec5272b956bb1e6af0dfb76281f19aa46b64cac9cd26f8ac1526c28 957e272d55f429581cfdfe818ea79c43c3fbfc7a6887bf41321d5c2b49291907 0e2cb0e5f4fc9f432bdbdf38b000c754fb95abb38d71054bea13f0c8541e92c6 73fc58cd4cf1d4dd68010e09c3bead697b71ca90c0a64f3875129f4ef811eed0 578b9020318eeff9c82d7a2d870ff9596e76b49036bca465add9a5c6d5055325 99089df3b98b98997fd079747afbbc92dcf90e312f3e7022d83db6ca4e6bb39e 5371bbbeabbd9c806ee90aacacab8a7a7ce845d15e45a5b76aa5666a6872c357 cc75839c03015173e538cb5ad19f785cd512a441d597f43dab89c785a111c874 2b0d29cefea470ec86b54a2b6012e3f4b495650bc1fec955330da9a891658c67: (20.278026227s)
	W1213 14:47:15.815876  171101 kubeadm.go:649] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 d20d51dfd3ea1c42e76f51de987dc0055b975553a6dfddd0e5bdd05d3801b1cd 8d9cbb53f37f0fdfc06cbf4901840cd15b164d449f8151b13ff3fe64cde068d2 44d01a73ae43fd9407a7465076ca692070ba4df0d1d14e079dea312a41c56d3b 91e8ba85fc5fbc949ccc20c641ca680bff9a2e5a34078139d87c51da9dd816db f0d50d08bec5272b956bb1e6af0dfb76281f19aa46b64cac9cd26f8ac1526c28 957e272d55f429581cfdfe818ea79c43c3fbfc7a6887bf41321d5c2b49291907 0e2cb0e5f4fc9f432bdbdf38b000c754fb95abb38d71054bea13f0c8541e92c6 73fc58cd4cf1d4dd68010e09c3bead697b71ca90c0a64f3875129f4ef811eed0 578b9020318eeff9c82d7a2d870ff9596e76b49036bca465add9a5c6d5055325 99089df3b98b98997fd079747afbbc92dcf90e312f3e7022d83db6ca4e6bb39e 5371bbbeabbd9c806ee90aacacab8a7a7ce845d15e45a5b76aa5666a6872c357 cc75839c03015173e538cb5ad19f785cd512a441d597f43dab89c785a111c874 2b0d29cefea470ec86b54a2b6012e3f4b495650bc1fec955330da9a891658c67: Process exited with status 1
	stdout:
	d20d51dfd3ea1c42e76f51de987dc0055b975553a6dfddd0e5bdd05d3801b1cd
	8d9cbb53f37f0fdfc06cbf4901840cd15b164d449f8151b13ff3fe64cde068d2
	44d01a73ae43fd9407a7465076ca692070ba4df0d1d14e079dea312a41c56d3b
	91e8ba85fc5fbc949ccc20c641ca680bff9a2e5a34078139d87c51da9dd816db
	f0d50d08bec5272b956bb1e6af0dfb76281f19aa46b64cac9cd26f8ac1526c28
	957e272d55f429581cfdfe818ea79c43c3fbfc7a6887bf41321d5c2b49291907
	0e2cb0e5f4fc9f432bdbdf38b000c754fb95abb38d71054bea13f0c8541e92c6
	
	stderr:
	E1213 14:47:15.802642    3513 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73fc58cd4cf1d4dd68010e09c3bead697b71ca90c0a64f3875129f4ef811eed0\": container with ID starting with 73fc58cd4cf1d4dd68010e09c3bead697b71ca90c0a64f3875129f4ef811eed0 not found: ID does not exist" containerID="73fc58cd4cf1d4dd68010e09c3bead697b71ca90c0a64f3875129f4ef811eed0"
	time="2025-12-13T14:47:15Z" level=fatal msg="stopping the container \"73fc58cd4cf1d4dd68010e09c3bead697b71ca90c0a64f3875129f4ef811eed0\": rpc error: code = NotFound desc = could not find container \"73fc58cd4cf1d4dd68010e09c3bead697b71ca90c0a64f3875129f4ef811eed0\": container with ID starting with 73fc58cd4cf1d4dd68010e09c3bead697b71ca90c0a64f3875129f4ef811eed0 not found: ID does not exist"
	I1213 14:47:15.815955  171101 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 14:47:15.873046  171101 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 14:47:15.885379  171101 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5651 Dec 13 14:46 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5654 Dec 13 14:46 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Dec 13 14:46 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5602 Dec 13 14:46 /etc/kubernetes/scheduler.conf
	
	I1213 14:47:15.885436  171101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 14:47:15.895785  171101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 14:47:15.906847  171101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 14:47:15.918190  171101 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 14:47:15.918258  171101 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 14:47:15.930815  171101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 14:47:15.941371  171101 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 14:47:15.941457  171101 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 14:47:15.953397  171101 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 14:47:15.966229  171101 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:47:16.032907  171101 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:47:17.788387  171101 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.755432086s)
	I1213 14:47:17.788478  171101 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:47:18.031620  171101 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:47:18.095707  171101 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:47:18.209824  171101 api_server.go:52] waiting for apiserver process to appear ...
	I1213 14:47:18.209946  171101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:47:17.576347  171817 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1213 14:47:16.490897  171994 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 14:47:16.490936  171994 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-131207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1213 14:47:16.490945  171994 cache.go:65] Caching tarball of preloaded images
	I1213 14:47:16.491052  171994 preload.go:238] Found /home/jenkins/minikube-integration/22122-131207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 14:47:16.491066  171994 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 14:47:16.491202  171994 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/force-systemd-env-936726/config.json ...
	I1213 14:47:16.491227  171994 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/force-systemd-env-936726/config.json: {Name:mk07fa1e48d4f5f92253610e0bdf6a8f4ee02fc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 14:47:16.491388  171994 start.go:360] acquireMachinesLock for force-systemd-env-936726: {Name:mkd3517afd6ad3d581ae9f96a02a4688cf83ce0e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 14:47:18.710981  171101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:47:19.210293  171101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:47:19.232516  171101 api_server.go:72] duration metric: took 1.02271206s to wait for apiserver process to appear ...
	I1213 14:47:19.232543  171101 api_server.go:88] waiting for apiserver healthz status ...
	I1213 14:47:19.232567  171101 api_server.go:253] Checking apiserver healthz at https://192.168.72.235:8443/healthz ...
	I1213 14:47:23.657338  171817 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1213 14:47:23.273222  171187 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 63da8cf30c0728b367192b23c0ad09a95691fefcbe11ed6dfe4005d4552edcd4 0485cc8093555115bf029d30a24bed9c1eb5e14ad63d4669822c93895d864884 4e2b7455204e91c5826b2ce34e52f4243b69ac1bbcaff1a3df0ec71d0dd1af34 55df0f6b2939ce926865c3d822f25235c14de29a8ea6a60cd697e55cdf027d9d 7d368c6c2e20483aa5b61f03e53d3ad11acd476917ff5089ec8350ddabf87027 a6bdf39155d38fe6ad628304312983b81c0e4d68980f0da049cbf021a87a05b9 84830b7f3a05b649b115275544842e360763bf4e9331d64850437d38cb3d69e1 4d2a56532056260bdd93e4b14a86c36307ac41998452bb61141f06cc76ec9477 f83edb7f56ec7222cb74cca751c4ba220e1a3fcad81dce9e7e241660348b0493 8e9ebe98e0ac3acba58f256c082e0de73e7d9385cb6a2521cb98a062713ecdf4 82999fbb510eeda7012e606ff9f37bb5d429ce07985955e556746cc183dc17a9 d1375c0e9aba7cfa8773de45d54ec8a8d032d9f0666cc081d7ea4d625de4b3bc: (20.691232382s)
	W1213 14:47:23.273372  171187 kubeadm.go:649] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 63da8cf30c0728b367192b23c0ad09a95691fefcbe11ed6dfe4005d4552edcd4 0485cc8093555115bf029d30a24bed9c1eb5e14ad63d4669822c93895d864884 4e2b7455204e91c5826b2ce34e52f4243b69ac1bbcaff1a3df0ec71d0dd1af34 55df0f6b2939ce926865c3d822f25235c14de29a8ea6a60cd697e55cdf027d9d 7d368c6c2e20483aa5b61f03e53d3ad11acd476917ff5089ec8350ddabf87027 a6bdf39155d38fe6ad628304312983b81c0e4d68980f0da049cbf021a87a05b9 84830b7f3a05b649b115275544842e360763bf4e9331d64850437d38cb3d69e1 4d2a56532056260bdd93e4b14a86c36307ac41998452bb61141f06cc76ec9477 f83edb7f56ec7222cb74cca751c4ba220e1a3fcad81dce9e7e241660348b0493 8e9ebe98e0ac3acba58f256c082e0de73e7d9385cb6a2521cb98a062713ecdf4 82999fbb510eeda7012e606ff9f37bb5d429ce07985955e556746cc183dc17a9 d1375c0e9aba7cfa8773de45d54ec8a8d032d9f0666cc081d7ea4d625de4b3bc: Process exited with status 1
	stdout:
	63da8cf30c0728b367192b23c0ad09a95691fefcbe11ed6dfe4005d4552edcd4
	0485cc8093555115bf029d30a24bed9c1eb5e14ad63d4669822c93895d864884
	4e2b7455204e91c5826b2ce34e52f4243b69ac1bbcaff1a3df0ec71d0dd1af34
	55df0f6b2939ce926865c3d822f25235c14de29a8ea6a60cd697e55cdf027d9d
	7d368c6c2e20483aa5b61f03e53d3ad11acd476917ff5089ec8350ddabf87027
	a6bdf39155d38fe6ad628304312983b81c0e4d68980f0da049cbf021a87a05b9
	
	stderr:
	E1213 14:47:23.267778    3637 log.go:32] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84830b7f3a05b649b115275544842e360763bf4e9331d64850437d38cb3d69e1\": container with ID starting with 84830b7f3a05b649b115275544842e360763bf4e9331d64850437d38cb3d69e1 not found: ID does not exist" containerID="84830b7f3a05b649b115275544842e360763bf4e9331d64850437d38cb3d69e1"
	time="2025-12-13T14:47:23Z" level=fatal msg="stopping the container \"84830b7f3a05b649b115275544842e360763bf4e9331d64850437d38cb3d69e1\": rpc error: code = NotFound desc = could not find container \"84830b7f3a05b649b115275544842e360763bf4e9331d64850437d38cb3d69e1\": container with ID starting with 84830b7f3a05b649b115275544842e360763bf4e9331d64850437d38cb3d69e1 not found: ID does not exist"
	I1213 14:47:23.273492  171187 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 14:47:23.326225  171187 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 14:47:23.347586  171187 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 13 14:45 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5637 Dec 13 14:45 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1953 Dec 13 14:45 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5589 Dec 13 14:45 /etc/kubernetes/scheduler.conf
	
	I1213 14:47:23.347670  171187 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 14:47:23.362373  171187 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 14:47:23.374897  171187 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 14:47:23.374967  171187 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 14:47:23.390044  171187 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 14:47:23.404451  171187 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 14:47:23.404531  171187 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 14:47:23.422988  171187 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 14:47:23.436959  171187 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 14:47:23.437028  171187 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 14:47:23.450157  171187 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 14:47:23.461974  171187 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:47:23.548458  171187 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:47:24.361425  171187 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:47:24.698585  171187 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:47:24.783344  171187 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:47:24.881131  171187 api_server.go:52] waiting for apiserver process to appear ...
	I1213 14:47:24.881252  171187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:47:25.382345  171187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:47:25.881313  171187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:47:25.906049  171187 api_server.go:72] duration metric: took 1.024934378s to wait for apiserver process to appear ...
	I1213 14:47:25.906099  171187 api_server.go:88] waiting for apiserver healthz status ...
	I1213 14:47:25.906124  171187 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I1213 14:47:27.845005  171994 start.go:364] duration metric: took 11.353549547s to acquireMachinesLock for "force-systemd-env-936726"
	I1213 14:47:27.845106  171994 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-936726 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-env-936726 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 14:47:27.845255  171994 start.go:125] createHost starting for "" (driver="kvm2")
	I1213 14:47:24.232985  171101 api_server.go:269] stopped: https://192.168.72.235:8443/healthz: Get "https://192.168.72.235:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1213 14:47:24.233114  171101 api_server.go:253] Checking apiserver healthz at https://192.168.72.235:8443/healthz ...
	I1213 14:47:27.644571  171187 api_server.go:279] https://192.168.50.50:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 14:47:27.644609  171187 api_server.go:103] status: https://192.168.50.50:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 14:47:27.644629  171187 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I1213 14:47:27.766738  171187 api_server.go:279] https://192.168.50.50:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 14:47:27.766770  171187 api_server.go:103] status: https://192.168.50.50:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 14:47:27.907159  171187 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I1213 14:47:27.914411  171187 api_server.go:279] https://192.168.50.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 14:47:27.914449  171187 api_server.go:103] status: https://192.168.50.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 14:47:28.407216  171187 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I1213 14:47:28.412847  171187 api_server.go:279] https://192.168.50.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 14:47:28.412881  171187 api_server.go:103] status: https://192.168.50.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 14:47:28.906481  171187 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I1213 14:47:28.912437  171187 api_server.go:279] https://192.168.50.50:8443/healthz returned 200:
	ok
	I1213 14:47:28.922538  171187 api_server.go:141] control plane version: v1.34.2
	I1213 14:47:28.922573  171187 api_server.go:131] duration metric: took 3.016465547s to wait for apiserver health ...
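The 403 → 500 → 200 sequence above is the apiserver coming back up: unauthenticated requests to /healthz are rejected until the RBAC bootstrap roles that permit them exist (note the failing rbac/bootstrap-roles hook in the 500 bodies), the 500s then list the post-start hooks that have not yet finished, and a bare "ok" means every check passed. A roughly equivalent manual probe is sketched below (illustrative only; the node IP is taken from the log above, everything else is an assumption, not part of the captured output):

	# Unauthenticated probe of the same endpoint, e.g. from inside the VM (minikube ssh).
	# While the control plane restarts this returns 403/500 bodies like the ones above;
	# once all post-start hooks pass it prints "ok" (or the per-check [+] list with ?verbose).
	curl -sk 'https://192.168.50.50:8443/healthz?verbose'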
	I1213 14:47:28.922585  171187 cni.go:84] Creating CNI manager for ""
	I1213 14:47:28.922594  171187 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 14:47:28.926188  171187 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 14:47:28.927576  171187 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 14:47:28.951810  171187 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
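The 496-byte conflist copied above is minikube's default bridge CNI configuration; its exact contents are not reproduced in the log. A config of this general shape is what the bridge and portmap plugins expect (a sketch only with assumed field values, not the verbatim file minikube generates):

	# Illustrative bridge CNI config of the kind written to /etc/cni/net.d/1-k8s.conflist
	# (field values are assumptions; the real file is generated by minikube's cni package).
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<-'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF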
	I1213 14:47:28.979674  171187 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 14:47:28.985151  171187 system_pods.go:59] 6 kube-system pods found
	I1213 14:47:28.985215  171187 system_pods.go:61] "coredns-66bc5c9577-rtkhx" [5ba241f5-6e50-474a-a043-1120ec1bbfa2] Running
	I1213 14:47:28.985234  171187 system_pods.go:61] "etcd-pause-711635" [d50229c0-e156-423e-9ab1-187eb0f22486] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 14:47:28.985248  171187 system_pods.go:61] "kube-apiserver-pause-711635" [6ab0ad19-01a6-4f2b-9807-ec7ecf230b75] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 14:47:28.985270  171187 system_pods.go:61] "kube-controller-manager-pause-711635" [e8378cef-1390-4fe0-a7b7-c1576fee1eab] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 14:47:28.985276  171187 system_pods.go:61] "kube-proxy-ck5nd" [b82a9f3d-e529-4e43-bb38-6b5d2be9e874] Running
	I1213 14:47:28.985291  171187 system_pods.go:61] "kube-scheduler-pause-711635" [0d44fccb-3015-41d5-ab9e-fc852eac9712] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 14:47:28.985300  171187 system_pods.go:74] duration metric: took 5.604619ms to wait for pod list to return data ...
	I1213 14:47:28.985315  171187 node_conditions.go:102] verifying NodePressure condition ...
	I1213 14:47:28.988565  171187 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 14:47:28.988603  171187 node_conditions.go:123] node cpu capacity is 2
	I1213 14:47:28.988620  171187 node_conditions.go:105] duration metric: took 3.29612ms to run NodePressure ...
	I1213 14:47:28.988681  171187 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:47:29.247652  171187 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1213 14:47:29.251676  171187 kubeadm.go:744] kubelet initialised
	I1213 14:47:29.251708  171187 kubeadm.go:745] duration metric: took 4.028391ms waiting for restarted kubelet to initialise ...
	I1213 14:47:29.251731  171187 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 14:47:29.268540  171187 ops.go:34] apiserver oom_adj: -16
	I1213 14:47:29.268563  171187 kubeadm.go:602] duration metric: took 26.837232093s to restartPrimaryControlPlane
	I1213 14:47:29.268574  171187 kubeadm.go:403] duration metric: took 27.165945698s to StartCluster
	I1213 14:47:29.268594  171187 settings.go:142] acquiring lock: {Name:mk721202c5d0c56fb9fb8fa9c13a73c8448f716f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 14:47:29.268688  171187 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22122-131207/kubeconfig
	I1213 14:47:29.269595  171187 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-131207/kubeconfig: {Name:mk5ec7ec5b8552878ed34d3387da68b813d7cd4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 14:47:29.269870  171187 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.50.50 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 14:47:29.269986  171187 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 14:47:29.270149  171187 config.go:182] Loaded profile config "pause-711635": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 14:47:29.271719  171187 out.go:179] * Verifying Kubernetes components...
	I1213 14:47:29.271727  171187 out.go:179] * Enabled addons: 
	I1213 14:47:26.770554  171817 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 14:47:26.774908  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:26.775510  171817 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:4b:4e", ip: ""} in network mk-stopped-upgrade-729395: {Iface:virbr1 ExpiryTime:2025-12-13 15:47:24 +0000 UTC Type:0 Mac:52:54:00:fb:4b:4e Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:stopped-upgrade-729395 Clientid:01:52:54:00:fb:4b:4e}
	I1213 14:47:26.775536  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined IP address 192.168.39.154 and MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:26.775801  171817 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/stopped-upgrade-729395/config.json ...
	I1213 14:47:26.776118  171817 machine.go:94] provisionDockerMachine start ...
	I1213 14:47:26.778727  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:26.779222  171817 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:4b:4e", ip: ""} in network mk-stopped-upgrade-729395: {Iface:virbr1 ExpiryTime:2025-12-13 15:47:24 +0000 UTC Type:0 Mac:52:54:00:fb:4b:4e Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:stopped-upgrade-729395 Clientid:01:52:54:00:fb:4b:4e}
	I1213 14:47:26.779247  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined IP address 192.168.39.154 and MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:26.779442  171817 main.go:143] libmachine: Using SSH client type: native
	I1213 14:47:26.779647  171817 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I1213 14:47:26.779657  171817 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 14:47:26.888343  171817 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1213 14:47:26.888373  171817 buildroot.go:166] provisioning hostname "stopped-upgrade-729395"
	I1213 14:47:26.891715  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:26.892149  171817 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:4b:4e", ip: ""} in network mk-stopped-upgrade-729395: {Iface:virbr1 ExpiryTime:2025-12-13 15:47:24 +0000 UTC Type:0 Mac:52:54:00:fb:4b:4e Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:stopped-upgrade-729395 Clientid:01:52:54:00:fb:4b:4e}
	I1213 14:47:26.892184  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined IP address 192.168.39.154 and MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:26.892369  171817 main.go:143] libmachine: Using SSH client type: native
	I1213 14:47:26.892671  171817 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I1213 14:47:26.892689  171817 main.go:143] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-729395 && echo "stopped-upgrade-729395" | sudo tee /etc/hostname
	I1213 14:47:27.018014  171817 main.go:143] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-729395
	
	I1213 14:47:27.021144  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:27.021562  171817 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:4b:4e", ip: ""} in network mk-stopped-upgrade-729395: {Iface:virbr1 ExpiryTime:2025-12-13 15:47:24 +0000 UTC Type:0 Mac:52:54:00:fb:4b:4e Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:stopped-upgrade-729395 Clientid:01:52:54:00:fb:4b:4e}
	I1213 14:47:27.021589  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined IP address 192.168.39.154 and MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:27.021735  171817 main.go:143] libmachine: Using SSH client type: native
	I1213 14:47:27.021937  171817 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I1213 14:47:27.021951  171817 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-729395' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-729395/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-729395' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 14:47:27.139341  171817 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 14:47:27.139377  171817 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22122-131207/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-131207/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-131207/.minikube}
	I1213 14:47:27.139414  171817 buildroot.go:174] setting up certificates
	I1213 14:47:27.139427  171817 provision.go:84] configureAuth start
	I1213 14:47:27.142589  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:27.142998  171817 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:4b:4e", ip: ""} in network mk-stopped-upgrade-729395: {Iface:virbr1 ExpiryTime:2025-12-13 15:47:24 +0000 UTC Type:0 Mac:52:54:00:fb:4b:4e Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:stopped-upgrade-729395 Clientid:01:52:54:00:fb:4b:4e}
	I1213 14:47:27.143028  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined IP address 192.168.39.154 and MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:27.145643  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:27.146089  171817 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:4b:4e", ip: ""} in network mk-stopped-upgrade-729395: {Iface:virbr1 ExpiryTime:2025-12-13 15:47:24 +0000 UTC Type:0 Mac:52:54:00:fb:4b:4e Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:stopped-upgrade-729395 Clientid:01:52:54:00:fb:4b:4e}
	I1213 14:47:27.146163  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined IP address 192.168.39.154 and MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:27.146360  171817 provision.go:143] copyHostCerts
	I1213 14:47:27.146423  171817 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-131207/.minikube/key.pem, removing ...
	I1213 14:47:27.146439  171817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-131207/.minikube/key.pem
	I1213 14:47:27.146495  171817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-131207/.minikube/key.pem (1675 bytes)
	I1213 14:47:27.146581  171817 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-131207/.minikube/ca.pem, removing ...
	I1213 14:47:27.146590  171817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-131207/.minikube/ca.pem
	I1213 14:47:27.146611  171817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-131207/.minikube/ca.pem (1078 bytes)
	I1213 14:47:27.146669  171817 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-131207/.minikube/cert.pem, removing ...
	I1213 14:47:27.146676  171817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-131207/.minikube/cert.pem
	I1213 14:47:27.146696  171817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-131207/.minikube/cert.pem (1123 bytes)
	I1213 14:47:27.146741  171817 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-131207/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-729395 san=[127.0.0.1 192.168.39.154 localhost minikube stopped-upgrade-729395]
	I1213 14:47:27.196166  171817 provision.go:177] copyRemoteCerts
	I1213 14:47:27.196239  171817 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 14:47:27.198887  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:27.199322  171817 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:4b:4e", ip: ""} in network mk-stopped-upgrade-729395: {Iface:virbr1 ExpiryTime:2025-12-13 15:47:24 +0000 UTC Type:0 Mac:52:54:00:fb:4b:4e Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:stopped-upgrade-729395 Clientid:01:52:54:00:fb:4b:4e}
	I1213 14:47:27.199355  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined IP address 192.168.39.154 and MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:27.199487  171817 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/stopped-upgrade-729395/id_rsa Username:docker}
	I1213 14:47:27.281474  171817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1213 14:47:27.305816  171817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 14:47:27.332293  171817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 14:47:27.356682  171817 provision.go:87] duration metric: took 217.23265ms to configureAuth
	I1213 14:47:27.356711  171817 buildroot.go:189] setting minikube options for container-runtime
	I1213 14:47:27.356933  171817 config.go:182] Loaded profile config "stopped-upgrade-729395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1213 14:47:27.359606  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:27.360014  171817 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:4b:4e", ip: ""} in network mk-stopped-upgrade-729395: {Iface:virbr1 ExpiryTime:2025-12-13 15:47:24 +0000 UTC Type:0 Mac:52:54:00:fb:4b:4e Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:stopped-upgrade-729395 Clientid:01:52:54:00:fb:4b:4e}
	I1213 14:47:27.360051  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined IP address 192.168.39.154 and MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:27.360247  171817 main.go:143] libmachine: Using SSH client type: native
	I1213 14:47:27.360463  171817 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I1213 14:47:27.360483  171817 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 14:47:27.597199  171817 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 14:47:27.597230  171817 machine.go:97] duration metric: took 821.093302ms to provisionDockerMachine
	I1213 14:47:27.597242  171817 start.go:293] postStartSetup for "stopped-upgrade-729395" (driver="kvm2")
	I1213 14:47:27.597253  171817 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 14:47:27.597324  171817 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 14:47:27.600199  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:27.600619  171817 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:4b:4e", ip: ""} in network mk-stopped-upgrade-729395: {Iface:virbr1 ExpiryTime:2025-12-13 15:47:24 +0000 UTC Type:0 Mac:52:54:00:fb:4b:4e Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:stopped-upgrade-729395 Clientid:01:52:54:00:fb:4b:4e}
	I1213 14:47:27.600651  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined IP address 192.168.39.154 and MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:27.600793  171817 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/stopped-upgrade-729395/id_rsa Username:docker}
	I1213 14:47:27.683437  171817 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 14:47:27.688885  171817 info.go:137] Remote host: Buildroot 2023.02.9
	I1213 14:47:27.688917  171817 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-131207/.minikube/addons for local assets ...
	I1213 14:47:27.688990  171817 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-131207/.minikube/files for local assets ...
	I1213 14:47:27.689113  171817 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/ssl/certs/1352342.pem -> 1352342.pem in /etc/ssl/certs
	I1213 14:47:27.689248  171817 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 14:47:27.703357  171817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/ssl/certs/1352342.pem --> /etc/ssl/certs/1352342.pem (1708 bytes)
	I1213 14:47:27.732717  171817 start.go:296] duration metric: took 135.436793ms for postStartSetup
	I1213 14:47:27.732764  171817 fix.go:56] duration metric: took 14.582511563s for fixHost
	I1213 14:47:27.735770  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:27.736300  171817 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:4b:4e", ip: ""} in network mk-stopped-upgrade-729395: {Iface:virbr1 ExpiryTime:2025-12-13 15:47:24 +0000 UTC Type:0 Mac:52:54:00:fb:4b:4e Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:stopped-upgrade-729395 Clientid:01:52:54:00:fb:4b:4e}
	I1213 14:47:27.736331  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined IP address 192.168.39.154 and MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:27.736528  171817 main.go:143] libmachine: Using SSH client type: native
	I1213 14:47:27.736789  171817 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I1213 14:47:27.736801  171817 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1213 14:47:27.844789  171817 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765637247.807140681
	
	I1213 14:47:27.844829  171817 fix.go:216] guest clock: 1765637247.807140681
	I1213 14:47:27.844839  171817 fix.go:229] Guest: 2025-12-13 14:47:27.807140681 +0000 UTC Remote: 2025-12-13 14:47:27.732768896 +0000 UTC m=+23.195388568 (delta=74.371785ms)
	I1213 14:47:27.844874  171817 fix.go:200] guest clock delta is within tolerance: 74.371785ms
	I1213 14:47:27.844888  171817 start.go:83] releasing machines lock for "stopped-upgrade-729395", held for 14.694665435s
	I1213 14:47:27.848166  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:27.848686  171817 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:4b:4e", ip: ""} in network mk-stopped-upgrade-729395: {Iface:virbr1 ExpiryTime:2025-12-13 15:47:24 +0000 UTC Type:0 Mac:52:54:00:fb:4b:4e Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:stopped-upgrade-729395 Clientid:01:52:54:00:fb:4b:4e}
	I1213 14:47:27.848729  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined IP address 192.168.39.154 and MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:27.849338  171817 ssh_runner.go:195] Run: cat /version.json
	I1213 14:47:27.849438  171817 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 14:47:27.853206  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:27.853356  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:27.853672  171817 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:4b:4e", ip: ""} in network mk-stopped-upgrade-729395: {Iface:virbr1 ExpiryTime:2025-12-13 15:47:24 +0000 UTC Type:0 Mac:52:54:00:fb:4b:4e Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:stopped-upgrade-729395 Clientid:01:52:54:00:fb:4b:4e}
	I1213 14:47:27.853705  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined IP address 192.168.39.154 and MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:27.853869  171817 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:4b:4e", ip: ""} in network mk-stopped-upgrade-729395: {Iface:virbr1 ExpiryTime:2025-12-13 15:47:24 +0000 UTC Type:0 Mac:52:54:00:fb:4b:4e Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:stopped-upgrade-729395 Clientid:01:52:54:00:fb:4b:4e}
	I1213 14:47:27.853897  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined IP address 192.168.39.154 and MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:27.853898  171817 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/stopped-upgrade-729395/id_rsa Username:docker}
	I1213 14:47:27.854131  171817 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/stopped-upgrade-729395/id_rsa Username:docker}
	W1213 14:47:27.962095  171817 out.go:285] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.35.0 -> Actual minikube version: v1.37.0
	I1213 14:47:27.962196  171817 ssh_runner.go:195] Run: systemctl --version
	I1213 14:47:27.969240  171817 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 14:47:28.127096  171817 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 14:47:28.135899  171817 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 14:47:28.135989  171817 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 14:47:28.153417  171817 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 14:47:28.153446  171817 start.go:496] detecting cgroup driver to use...
	I1213 14:47:28.153533  171817 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 14:47:28.171887  171817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 14:47:28.188156  171817 docker.go:218] disabling cri-docker service (if available) ...
	I1213 14:47:28.188228  171817 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 14:47:28.203323  171817 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 14:47:28.217698  171817 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 14:47:28.345675  171817 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 14:47:28.492153  171817 docker.go:234] disabling docker service ...
	I1213 14:47:28.492247  171817 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 14:47:28.510264  171817 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 14:47:28.525997  171817 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 14:47:28.675716  171817 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 14:47:28.830784  171817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 14:47:28.844419  171817 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 14:47:28.864168  171817 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1213 14:47:28.864225  171817 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 14:47:28.874198  171817 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 14:47:28.874249  171817 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 14:47:28.883781  171817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 14:47:28.893252  171817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 14:47:28.902623  171817 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 14:47:28.912962  171817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 14:47:28.925186  171817 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 14:47:28.948354  171817 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 14:47:28.963065  171817 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 14:47:28.974284  171817 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 14:47:28.974356  171817 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 14:47:28.991810  171817 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 14:47:29.004458  171817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 14:47:29.152781  171817 ssh_runner.go:195] Run: sudo systemctl restart crio
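Taken together, the sed edits above (pause image, cgroup driver, conmon cgroup, unprivileged-port sysctl) determine how cri-o is reconfigured before this restart. A rough summary and one way to confirm the applied values (a sketch; the settings are reconstructed from the commands in this log, not copied from the guest):

	# Net effect on /etc/crio/crio.conf.d/02-crio.conf, per the sed commands above:
	#   pause_image = "registry.k8s.io/pause:3.10"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [ "net.ipv4.ip_unprivileged_port_start=0", ]
	# Confirm on the node:
	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf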
	I1213 14:47:29.252669  171817 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 14:47:29.252758  171817 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 14:47:29.258989  171817 start.go:564] Will wait 60s for crictl version
	I1213 14:47:29.259088  171817 ssh_runner.go:195] Run: which crictl
	I1213 14:47:29.263328  171817 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 14:47:29.301423  171817 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1213 14:47:29.301516  171817 ssh_runner.go:195] Run: crio --version
	I1213 14:47:29.334437  171817 ssh_runner.go:195] Run: crio --version
	I1213 14:47:29.370001  171817 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1213 14:47:29.375867  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:29.377945  171817 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:4b:4e", ip: ""} in network mk-stopped-upgrade-729395: {Iface:virbr1 ExpiryTime:2025-12-13 15:47:24 +0000 UTC Type:0 Mac:52:54:00:fb:4b:4e Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:stopped-upgrade-729395 Clientid:01:52:54:00:fb:4b:4e}
	I1213 14:47:29.377973  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined IP address 192.168.39.154 and MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:29.378330  171817 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1213 14:47:29.383645  171817 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 14:47:29.400646  171817 kubeadm.go:884] updating cluster {Name:stopped-upgrade-729395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:stopped-upgrade-729395 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 14:47:29.400772  171817 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1213 14:47:29.400840  171817 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 14:47:29.463100  171817 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1213 14:47:29.463178  171817 ssh_runner.go:195] Run: which lz4
	I1213 14:47:29.468693  171817 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1213 14:47:29.474533  171817 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1213 14:47:29.474575  171817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1213 14:47:29.273099  171187 addons.go:530] duration metric: took 3.126612ms for enable addons: enabled=[]
	I1213 14:47:29.273124  171187 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 14:47:29.545226  171187 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 14:47:29.565348  171187 node_ready.go:35] waiting up to 6m0s for node "pause-711635" to be "Ready" ...
	I1213 14:47:29.568611  171187 node_ready.go:49] node "pause-711635" is "Ready"
	I1213 14:47:29.568643  171187 node_ready.go:38] duration metric: took 3.249885ms for node "pause-711635" to be "Ready" ...
	I1213 14:47:29.568660  171187 api_server.go:52] waiting for apiserver process to appear ...
	I1213 14:47:29.568714  171187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:47:29.588092  171187 api_server.go:72] duration metric: took 318.171091ms to wait for apiserver process to appear ...
	I1213 14:47:29.588120  171187 api_server.go:88] waiting for apiserver healthz status ...
	I1213 14:47:29.588142  171187 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I1213 14:47:29.593541  171187 api_server.go:279] https://192.168.50.50:8443/healthz returned 200:
	ok
	I1213 14:47:29.594957  171187 api_server.go:141] control plane version: v1.34.2
	I1213 14:47:29.594979  171187 api_server.go:131] duration metric: took 6.852168ms to wait for apiserver health ...
	I1213 14:47:29.594988  171187 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 14:47:29.600055  171187 system_pods.go:59] 6 kube-system pods found
	I1213 14:47:29.600100  171187 system_pods.go:61] "coredns-66bc5c9577-rtkhx" [5ba241f5-6e50-474a-a043-1120ec1bbfa2] Running
	I1213 14:47:29.600115  171187 system_pods.go:61] "etcd-pause-711635" [d50229c0-e156-423e-9ab1-187eb0f22486] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 14:47:29.600125  171187 system_pods.go:61] "kube-apiserver-pause-711635" [6ab0ad19-01a6-4f2b-9807-ec7ecf230b75] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 14:47:29.600138  171187 system_pods.go:61] "kube-controller-manager-pause-711635" [e8378cef-1390-4fe0-a7b7-c1576fee1eab] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 14:47:29.600147  171187 system_pods.go:61] "kube-proxy-ck5nd" [b82a9f3d-e529-4e43-bb38-6b5d2be9e874] Running
	I1213 14:47:29.600153  171187 system_pods.go:61] "kube-scheduler-pause-711635" [0d44fccb-3015-41d5-ab9e-fc852eac9712] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 14:47:29.600166  171187 system_pods.go:74] duration metric: took 5.172302ms to wait for pod list to return data ...
	I1213 14:47:29.600176  171187 default_sa.go:34] waiting for default service account to be created ...
	I1213 14:47:29.606064  171187 default_sa.go:45] found service account: "default"
	I1213 14:47:29.606111  171187 default_sa.go:55] duration metric: took 5.925654ms for default service account to be created ...
	I1213 14:47:29.606123  171187 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 14:47:29.610971  171187 system_pods.go:86] 6 kube-system pods found
	I1213 14:47:29.611008  171187 system_pods.go:89] "coredns-66bc5c9577-rtkhx" [5ba241f5-6e50-474a-a043-1120ec1bbfa2] Running
	I1213 14:47:29.611023  171187 system_pods.go:89] "etcd-pause-711635" [d50229c0-e156-423e-9ab1-187eb0f22486] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 14:47:29.611037  171187 system_pods.go:89] "kube-apiserver-pause-711635" [6ab0ad19-01a6-4f2b-9807-ec7ecf230b75] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 14:47:29.611052  171187 system_pods.go:89] "kube-controller-manager-pause-711635" [e8378cef-1390-4fe0-a7b7-c1576fee1eab] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 14:47:29.611060  171187 system_pods.go:89] "kube-proxy-ck5nd" [b82a9f3d-e529-4e43-bb38-6b5d2be9e874] Running
	I1213 14:47:29.611087  171187 system_pods.go:89] "kube-scheduler-pause-711635" [0d44fccb-3015-41d5-ab9e-fc852eac9712] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 14:47:29.611103  171187 system_pods.go:126] duration metric: took 4.971487ms to wait for k8s-apps to be running ...
	I1213 14:47:29.611122  171187 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 14:47:29.611192  171187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 14:47:29.648581  171187 system_svc.go:56] duration metric: took 37.44806ms WaitForService to wait for kubelet
	I1213 14:47:29.648619  171187 kubeadm.go:587] duration metric: took 378.716135ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 14:47:29.648641  171187 node_conditions.go:102] verifying NodePressure condition ...
	I1213 14:47:29.652990  171187 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 14:47:29.653017  171187 node_conditions.go:123] node cpu capacity is 2
	I1213 14:47:29.653035  171187 node_conditions.go:105] duration metric: took 4.387602ms to run NodePressure ...
	I1213 14:47:29.653051  171187 start.go:242] waiting for startup goroutines ...
	I1213 14:47:29.653063  171187 start.go:247] waiting for cluster config update ...
	I1213 14:47:29.653097  171187 start.go:256] writing updated cluster config ...
	I1213 14:47:29.653496  171187 ssh_runner.go:195] Run: rm -f paused
	I1213 14:47:29.660969  171187 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 14:47:29.661751  171187 kapi.go:59] client config for pause-711635: &rest.Config{Host:"https://192.168.50.50:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22122-131207/.minikube/profiles/pause-711635/client.crt", KeyFile:"/home/jenkins/minikube-integration/22122-131207/.minikube/profiles/pause-711635/client.key", CAFile:"/home/jenkins/minikube-integration/22122-131207/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 14:47:29.665279  171187 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rtkhx" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 14:47:29.671684  171187 pod_ready.go:94] pod "coredns-66bc5c9577-rtkhx" is "Ready"
	I1213 14:47:29.671710  171187 pod_ready.go:86] duration metric: took 6.402456ms for pod "coredns-66bc5c9577-rtkhx" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 14:47:29.674313  171187 pod_ready.go:83] waiting for pod "etcd-pause-711635" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 14:47:27.847419  171994 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1213 14:47:27.847680  171994 start.go:159] libmachine.API.Create for "force-systemd-env-936726" (driver="kvm2")
	I1213 14:47:27.847722  171994 client.go:173] LocalClient.Create starting
	I1213 14:47:27.847825  171994 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem
	I1213 14:47:27.847868  171994 main.go:143] libmachine: Decoding PEM data...
	I1213 14:47:27.847895  171994 main.go:143] libmachine: Parsing certificate...
	I1213 14:47:27.847975  171994 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22122-131207/.minikube/certs/cert.pem
	I1213 14:47:27.848004  171994 main.go:143] libmachine: Decoding PEM data...
	I1213 14:47:27.848027  171994 main.go:143] libmachine: Parsing certificate...
	I1213 14:47:27.848558  171994 main.go:143] libmachine: creating domain...
	I1213 14:47:27.848576  171994 main.go:143] libmachine: creating network...
	I1213 14:47:27.850655  171994 main.go:143] libmachine: found existing default network
	I1213 14:47:27.850968  171994 main.go:143] libmachine: <network connections='3'>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1213 14:47:27.852265  171994 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:55:29:24} reservation:<nil>}
	I1213 14:47:27.852928  171994 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:78:61:79} reservation:<nil>}
	I1213 14:47:27.853994  171994 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c51440}
	I1213 14:47:27.854146  171994 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-force-systemd-env-936726</name>
	  <dns enable='no'/>
	  <ip address='192.168.61.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.61.2' end='192.168.61.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1213 14:47:27.862750  171994 main.go:143] libmachine: creating private network mk-force-systemd-env-936726 192.168.61.0/24...
	I1213 14:47:27.948619  171994 main.go:143] libmachine: private network mk-force-systemd-env-936726 192.168.61.0/24 created
	I1213 14:47:27.948951  171994 main.go:143] libmachine: <network>
	  <name>mk-force-systemd-env-936726</name>
	  <uuid>39ccad8f-9e79-4a06-be31-341b15d55603</uuid>
	  <bridge name='virbr3' stp='on' delay='0'/>
	  <mac address='52:54:00:65:30:e8'/>
	  <dns enable='no'/>
	  <ip address='192.168.61.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.61.2' end='192.168.61.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1213 14:47:27.948983  171994 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22122-131207/.minikube/machines/force-systemd-env-936726 ...
	I1213 14:47:27.949028  171994 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22122-131207/.minikube/cache/iso/amd64/minikube-v1.37.0-1765613186-22122-amd64.iso
	I1213 14:47:27.949046  171994 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22122-131207/.minikube
	I1213 14:47:27.949161  171994 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22122-131207/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22122-131207/.minikube/cache/iso/amd64/minikube-v1.37.0-1765613186-22122-amd64.iso...
	I1213 14:47:28.245972  171994 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22122-131207/.minikube/machines/force-systemd-env-936726/id_rsa...
	I1213 14:47:28.266477  171994 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22122-131207/.minikube/machines/force-systemd-env-936726/force-systemd-env-936726.rawdisk...
	I1213 14:47:28.266518  171994 main.go:143] libmachine: Writing magic tar header
	I1213 14:47:28.266544  171994 main.go:143] libmachine: Writing SSH key tar header
	I1213 14:47:28.266642  171994 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22122-131207/.minikube/machines/force-systemd-env-936726 ...
	I1213 14:47:28.266735  171994 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22122-131207/.minikube/machines/force-systemd-env-936726
	I1213 14:47:28.266772  171994 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22122-131207/.minikube/machines/force-systemd-env-936726 (perms=drwx------)
	I1213 14:47:28.266791  171994 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22122-131207/.minikube/machines
	I1213 14:47:28.266810  171994 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22122-131207/.minikube/machines (perms=drwxr-xr-x)
	I1213 14:47:28.266824  171994 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22122-131207/.minikube
	I1213 14:47:28.266834  171994 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22122-131207/.minikube (perms=drwxr-xr-x)
	I1213 14:47:28.266844  171994 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22122-131207
	I1213 14:47:28.266862  171994 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22122-131207 (perms=drwxrwxr-x)
	I1213 14:47:28.266881  171994 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1213 14:47:28.266894  171994 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1213 14:47:28.266906  171994 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1213 14:47:28.266919  171994 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1213 14:47:28.266930  171994 main.go:143] libmachine: checking permissions on dir: /home
	I1213 14:47:28.266940  171994 main.go:143] libmachine: skipping /home - not owner
	I1213 14:47:28.266945  171994 main.go:143] libmachine: defining domain...
	I1213 14:47:28.268351  171994 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>force-systemd-env-936726</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22122-131207/.minikube/machines/force-systemd-env-936726/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22122-131207/.minikube/machines/force-systemd-env-936726/force-systemd-env-936726.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-force-systemd-env-936726'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1213 14:47:28.273261  171994 main.go:143] libmachine: domain force-systemd-env-936726 has defined MAC address 52:54:00:87:5c:14 in network default
	I1213 14:47:28.273937  171994 main.go:143] libmachine: domain force-systemd-env-936726 has defined MAC address 52:54:00:f5:25:0e in network mk-force-systemd-env-936726
	I1213 14:47:28.273955  171994 main.go:143] libmachine: starting domain...
	I1213 14:47:28.273959  171994 main.go:143] libmachine: ensuring networks are active...
	I1213 14:47:28.274922  171994 main.go:143] libmachine: Ensuring network default is active
	I1213 14:47:28.275532  171994 main.go:143] libmachine: Ensuring network mk-force-systemd-env-936726 is active
	I1213 14:47:28.276444  171994 main.go:143] libmachine: getting domain XML...
	I1213 14:47:28.278041  171994 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>force-systemd-env-936726</name>
	  <uuid>2c97758d-da61-4b3e-b6e1-87b6b49a456d</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22122-131207/.minikube/machines/force-systemd-env-936726/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22122-131207/.minikube/machines/force-systemd-env-936726/force-systemd-env-936726.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:f5:25:0e'/>
	      <source network='mk-force-systemd-env-936726'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:87:5c:14'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1213 14:47:29.849846  171994 main.go:143] libmachine: waiting for domain to start...
	I1213 14:47:29.851473  171994 main.go:143] libmachine: domain is now running
	I1213 14:47:29.851493  171994 main.go:143] libmachine: waiting for IP...
	I1213 14:47:29.852525  171994 main.go:143] libmachine: domain force-systemd-env-936726 has defined MAC address 52:54:00:f5:25:0e in network mk-force-systemd-env-936726
	I1213 14:47:29.853404  171994 main.go:143] libmachine: no network interface addresses found for domain force-systemd-env-936726 (source=lease)
	I1213 14:47:29.853420  171994 main.go:143] libmachine: trying to list again with source=arp
	I1213 14:47:29.853814  171994 main.go:143] libmachine: unable to find current IP address of domain force-systemd-env-936726 in network mk-force-systemd-env-936726 (interfaces detected: [])
	I1213 14:47:29.853872  171994 retry.go:31] will retry after 309.812108ms: waiting for domain to come up
	I1213 14:47:30.165468  171994 main.go:143] libmachine: domain force-systemd-env-936726 has defined MAC address 52:54:00:f5:25:0e in network mk-force-systemd-env-936726
	I1213 14:47:30.166343  171994 main.go:143] libmachine: no network interface addresses found for domain force-systemd-env-936726 (source=lease)
	I1213 14:47:30.166365  171994 main.go:143] libmachine: trying to list again with source=arp
	I1213 14:47:30.166734  171994 main.go:143] libmachine: unable to find current IP address of domain force-systemd-env-936726 in network mk-force-systemd-env-936726 (interfaces detected: [])
	I1213 14:47:30.166779  171994 retry.go:31] will retry after 373.272172ms: waiting for domain to come up
	I1213 14:47:30.541387  171994 main.go:143] libmachine: domain force-systemd-env-936726 has defined MAC address 52:54:00:f5:25:0e in network mk-force-systemd-env-936726
	I1213 14:47:30.542174  171994 main.go:143] libmachine: no network interface addresses found for domain force-systemd-env-936726 (source=lease)
	I1213 14:47:30.542207  171994 main.go:143] libmachine: trying to list again with source=arp
	I1213 14:47:30.542569  171994 main.go:143] libmachine: unable to find current IP address of domain force-systemd-env-936726 in network mk-force-systemd-env-936726 (interfaces detected: [])
	I1213 14:47:30.542608  171994 retry.go:31] will retry after 450.473735ms: waiting for domain to come up
	I1213 14:47:30.994575  171994 main.go:143] libmachine: domain force-systemd-env-936726 has defined MAC address 52:54:00:f5:25:0e in network mk-force-systemd-env-936726
	I1213 14:47:30.995576  171994 main.go:143] libmachine: no network interface addresses found for domain force-systemd-env-936726 (source=lease)
	I1213 14:47:30.995597  171994 main.go:143] libmachine: trying to list again with source=arp
	I1213 14:47:30.996017  171994 main.go:143] libmachine: unable to find current IP address of domain force-systemd-env-936726 in network mk-force-systemd-env-936726 (interfaces detected: [])
	I1213 14:47:30.996087  171994 retry.go:31] will retry after 479.757929ms: waiting for domain to come up
	I1213 14:47:29.233774  171101 api_server.go:269] stopped: https://192.168.72.235:8443/healthz: Get "https://192.168.72.235:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1213 14:47:29.233866  171101 api_server.go:253] Checking apiserver healthz at https://192.168.72.235:8443/healthz ...
	I1213 14:47:30.954824  171817 crio.go:462] duration metric: took 1.486181592s to copy over tarball
	I1213 14:47:30.954936  171817 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1213 14:47:33.561388  171817 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.60640639s)
	I1213 14:47:33.561422  171817 crio.go:469] duration metric: took 2.606552797s to extract the tarball
	I1213 14:47:33.561434  171817 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1213 14:47:33.599854  171817 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 14:47:33.642662  171817 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 14:47:33.642688  171817 cache_images.go:86] Images are preloaded, skipping loading
	I1213 14:47:33.642696  171817 kubeadm.go:935] updating node { 192.168.39.154 8443 v1.32.0 crio true true} ...
	I1213 14:47:33.642816  171817 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=stopped-upgrade-729395 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.154
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:stopped-upgrade-729395 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 14:47:33.642891  171817 ssh_runner.go:195] Run: crio config
	I1213 14:47:33.692944  171817 cni.go:84] Creating CNI manager for ""
	I1213 14:47:33.692976  171817 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 14:47:33.693000  171817 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 14:47:33.693030  171817 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.154 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-729395 NodeName:stopped-upgrade-729395 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.154"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.154 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 14:47:33.693205  171817 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.154
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "stopped-upgrade-729395"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.154"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.154"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 14:47:33.693302  171817 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1213 14:47:33.703199  171817 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 14:47:33.703284  171817 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 14:47:33.712507  171817 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1213 14:47:33.731659  171817 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 14:47:33.749090  171817 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1213 14:47:33.767611  171817 ssh_runner.go:195] Run: grep 192.168.39.154	control-plane.minikube.internal$ /etc/hosts
	I1213 14:47:33.771622  171817 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.154	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 14:47:33.785117  171817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 14:47:33.914543  171817 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 14:47:33.931768  171817 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/stopped-upgrade-729395 for IP: 192.168.39.154
	I1213 14:47:33.931806  171817 certs.go:195] generating shared ca certs ...
	I1213 14:47:33.931826  171817 certs.go:227] acquiring lock for ca certs: {Name:mk4d1e73c1a19abecca2e995e14d97b9ab149024 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 14:47:33.932045  171817 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-131207/.minikube/ca.key
	I1213 14:47:33.932135  171817 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-131207/.minikube/proxy-client-ca.key
	I1213 14:47:33.932151  171817 certs.go:257] generating profile certs ...
	I1213 14:47:33.932316  171817 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/stopped-upgrade-729395/client.key
	I1213 14:47:33.932405  171817 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/stopped-upgrade-729395/apiserver.key.bc702708
	I1213 14:47:33.932460  171817 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/stopped-upgrade-729395/proxy-client.key
	I1213 14:47:33.932613  171817 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/135234.pem (1338 bytes)
	W1213 14:47:33.932658  171817 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-131207/.minikube/certs/135234_empty.pem, impossibly tiny 0 bytes
	I1213 14:47:33.932672  171817 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 14:47:33.932712  171817 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem (1078 bytes)
	I1213 14:47:33.932746  171817 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/cert.pem (1123 bytes)
	I1213 14:47:33.932783  171817 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/key.pem (1675 bytes)
	I1213 14:47:33.932842  171817 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/ssl/certs/1352342.pem (1708 bytes)
	I1213 14:47:33.933697  171817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 14:47:33.970590  171817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 14:47:34.006466  171817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 14:47:34.038698  171817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 14:47:34.063273  171817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/stopped-upgrade-729395/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1213 14:47:34.088188  171817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/stopped-upgrade-729395/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 14:47:34.113059  171817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/stopped-upgrade-729395/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 14:47:34.137585  171817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/stopped-upgrade-729395/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 14:47:34.164254  171817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 14:47:34.191343  171817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/certs/135234.pem --> /usr/share/ca-certificates/135234.pem (1338 bytes)
	I1213 14:47:34.217495  171817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/ssl/certs/1352342.pem --> /usr/share/ca-certificates/1352342.pem (1708 bytes)
	I1213 14:47:34.242934  171817 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 14:47:34.265989  171817 ssh_runner.go:195] Run: openssl version
	I1213 14:47:34.272872  171817 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1352342.pem
	I1213 14:47:34.285962  171817 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1352342.pem /etc/ssl/certs/1352342.pem
	I1213 14:47:34.299532  171817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1352342.pem
	I1213 14:47:34.305733  171817 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 14:00 /usr/share/ca-certificates/1352342.pem
	I1213 14:47:34.305800  171817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1352342.pem
	I1213 14:47:34.314432  171817 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 14:47:34.325469  171817 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1352342.pem /etc/ssl/certs/3ec20f2e.0
	I1213 14:47:34.335567  171817 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:47:34.345541  171817 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 14:47:34.355102  171817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:47:34.360042  171817 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 13:06 /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:47:34.360128  171817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:47:34.366853  171817 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 14:47:34.376931  171817 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 14:47:34.386707  171817 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/135234.pem
	I1213 14:47:34.397970  171817 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/135234.pem /etc/ssl/certs/135234.pem
	I1213 14:47:34.408124  171817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/135234.pem
	I1213 14:47:34.413318  171817 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 14:00 /usr/share/ca-certificates/135234.pem
	I1213 14:47:34.413407  171817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/135234.pem
	I1213 14:47:34.419366  171817 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 14:47:34.429014  171817 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/135234.pem /etc/ssl/certs/51391683.0
	I1213 14:47:34.439825  171817 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 14:47:34.445001  171817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 14:47:34.451629  171817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 14:47:34.457785  171817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 14:47:34.464217  171817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 14:47:34.470740  171817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 14:47:34.476837  171817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 14:47:34.484770  171817 kubeadm.go:401] StartCluster: {Name:stopped-upgrade-729395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:stopped-upgrade-729395 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 14:47:34.484851  171817 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 14:47:34.484934  171817 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 14:47:34.529458  171817 cri.go:89] found id: ""
	I1213 14:47:34.529565  171817 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 14:47:34.540523  171817 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 14:47:34.540551  171817 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 14:47:34.540613  171817 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 14:47:34.551987  171817 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 14:47:34.552659  171817 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-729395" does not appear in /home/jenkins/minikube-integration/22122-131207/kubeconfig
	I1213 14:47:34.552950  171817 kubeconfig.go:62] /home/jenkins/minikube-integration/22122-131207/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-729395" cluster setting kubeconfig missing "stopped-upgrade-729395" context setting]
	I1213 14:47:34.553594  171817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-131207/kubeconfig: {Name:mk5ec7ec5b8552878ed34d3387da68b813d7cd4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W1213 14:47:31.683264  171187 pod_ready.go:104] pod "etcd-pause-711635" is not "Ready", error: <nil>
	W1213 14:47:34.180843  171187 pod_ready.go:104] pod "etcd-pause-711635" is not "Ready", error: <nil>
	W1213 14:47:36.181363  171187 pod_ready.go:104] pod "etcd-pause-711635" is not "Ready", error: <nil>
	I1213 14:47:31.478130  171994 main.go:143] libmachine: domain force-systemd-env-936726 has defined MAC address 52:54:00:f5:25:0e in network mk-force-systemd-env-936726
	I1213 14:47:31.478997  171994 main.go:143] libmachine: no network interface addresses found for domain force-systemd-env-936726 (source=lease)
	I1213 14:47:31.479021  171994 main.go:143] libmachine: trying to list again with source=arp
	I1213 14:47:31.479430  171994 main.go:143] libmachine: unable to find current IP address of domain force-systemd-env-936726 in network mk-force-systemd-env-936726 (interfaces detected: [])
	I1213 14:47:31.479469  171994 retry.go:31] will retry after 479.426304ms: waiting for domain to come up
	I1213 14:47:31.960336  171994 main.go:143] libmachine: domain force-systemd-env-936726 has defined MAC address 52:54:00:f5:25:0e in network mk-force-systemd-env-936726
	I1213 14:47:31.960987  171994 main.go:143] libmachine: no network interface addresses found for domain force-systemd-env-936726 (source=lease)
	I1213 14:47:31.961004  171994 main.go:143] libmachine: trying to list again with source=arp
	I1213 14:47:31.961384  171994 main.go:143] libmachine: unable to find current IP address of domain force-systemd-env-936726 in network mk-force-systemd-env-936726 (interfaces detected: [])
	I1213 14:47:31.961428  171994 retry.go:31] will retry after 914.002134ms: waiting for domain to come up
	I1213 14:47:32.877707  171994 main.go:143] libmachine: domain force-systemd-env-936726 has defined MAC address 52:54:00:f5:25:0e in network mk-force-systemd-env-936726
	I1213 14:47:32.878460  171994 main.go:143] libmachine: no network interface addresses found for domain force-systemd-env-936726 (source=lease)
	I1213 14:47:32.878482  171994 main.go:143] libmachine: trying to list again with source=arp
	I1213 14:47:32.878804  171994 main.go:143] libmachine: unable to find current IP address of domain force-systemd-env-936726 in network mk-force-systemd-env-936726 (interfaces detected: [])
	I1213 14:47:32.878853  171994 retry.go:31] will retry after 899.751788ms: waiting for domain to come up
	I1213 14:47:33.780389  171994 main.go:143] libmachine: domain force-systemd-env-936726 has defined MAC address 52:54:00:f5:25:0e in network mk-force-systemd-env-936726
	I1213 14:47:33.781036  171994 main.go:143] libmachine: no network interface addresses found for domain force-systemd-env-936726 (source=lease)
	I1213 14:47:33.781051  171994 main.go:143] libmachine: trying to list again with source=arp
	I1213 14:47:33.781431  171994 main.go:143] libmachine: unable to find current IP address of domain force-systemd-env-936726 in network mk-force-systemd-env-936726 (interfaces detected: [])
	I1213 14:47:33.781473  171994 retry.go:31] will retry after 1.04050293s: waiting for domain to come up
	I1213 14:47:34.823437  171994 main.go:143] libmachine: domain force-systemd-env-936726 has defined MAC address 52:54:00:f5:25:0e in network mk-force-systemd-env-936726
	I1213 14:47:34.824225  171994 main.go:143] libmachine: no network interface addresses found for domain force-systemd-env-936726 (source=lease)
	I1213 14:47:34.824243  171994 main.go:143] libmachine: trying to list again with source=arp
	I1213 14:47:34.824658  171994 main.go:143] libmachine: unable to find current IP address of domain force-systemd-env-936726 in network mk-force-systemd-env-936726 (interfaces detected: [])
	I1213 14:47:34.824708  171994 retry.go:31] will retry after 1.142227745s: waiting for domain to come up
	I1213 14:47:35.968344  171994 main.go:143] libmachine: domain force-systemd-env-936726 has defined MAC address 52:54:00:f5:25:0e in network mk-force-systemd-env-936726
	I1213 14:47:35.969004  171994 main.go:143] libmachine: no network interface addresses found for domain force-systemd-env-936726 (source=lease)
	I1213 14:47:35.969025  171994 main.go:143] libmachine: trying to list again with source=arp
	I1213 14:47:35.969388  171994 main.go:143] libmachine: unable to find current IP address of domain force-systemd-env-936726 in network mk-force-systemd-env-936726 (interfaces detected: [])
	I1213 14:47:35.969434  171994 retry.go:31] will retry after 1.861086546s: waiting for domain to come up
	I1213 14:47:34.234636  171101 api_server.go:269] stopped: https://192.168.72.235:8443/healthz: Get "https://192.168.72.235:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1213 14:47:34.234683  171101 api_server.go:253] Checking apiserver healthz at https://192.168.72.235:8443/healthz ...
	I1213 14:47:34.631767  171817 kapi.go:59] client config for stopped-upgrade-729395: &rest.Config{Host:"https://192.168.39.154:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22122-131207/.minikube/profiles/stopped-upgrade-729395/client.crt", KeyFile:"/home/jenkins/minikube-integration/22122-131207/.minikube/profiles/stopped-upgrade-729395/client.key", CAFile:"/home/jenkins/minikube-integration/22122-131207/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 14:47:34.632441  171817 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 14:47:34.632463  171817 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 14:47:34.632470  171817 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 14:47:34.632477  171817 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 14:47:34.632483  171817 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 14:47:34.632963  171817 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 14:47:34.648270  171817 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -41,9 +41,6 @@
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      - name: "proxy-refresh-interval"
	-        value: "70000"
	 kubernetesVersion: v1.32.0
	 networking:
	   dnsDomain: cluster.local
	
	-- /stdout --
	I1213 14:47:34.648294  171817 kubeadm.go:1161] stopping kube-system containers ...
	I1213 14:47:34.648311  171817 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1213 14:47:34.648368  171817 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 14:47:34.691535  171817 cri.go:89] found id: ""
	I1213 14:47:34.691620  171817 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 14:47:34.712801  171817 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 14:47:34.722813  171817 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 14:47:34.722843  171817 kubeadm.go:158] found existing configuration files:
	
	I1213 14:47:34.722911  171817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 14:47:34.733046  171817 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 14:47:34.733137  171817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 14:47:34.742408  171817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 14:47:34.751561  171817 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 14:47:34.751631  171817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 14:47:34.761410  171817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 14:47:34.770729  171817 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 14:47:34.770792  171817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 14:47:34.780533  171817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 14:47:34.790822  171817 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 14:47:34.790900  171817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 14:47:34.802492  171817 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 14:47:34.812645  171817 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:47:34.873896  171817 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:47:36.147624  171817 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.273680509s)
	I1213 14:47:36.147716  171817 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:47:36.363812  171817 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:47:36.438759  171817 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:47:36.535297  171817 api_server.go:52] waiting for apiserver process to appear ...
	I1213 14:47:36.535390  171817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:47:37.036403  171817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:47:37.536248  171817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:47:37.552569  171817 api_server.go:72] duration metric: took 1.017282565s to wait for apiserver process to appear ...
	I1213 14:47:37.552609  171817 api_server.go:88] waiting for apiserver healthz status ...
	I1213 14:47:37.552640  171817 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8443/healthz ...
	W1213 14:47:38.680907  171187 pod_ready.go:104] pod "etcd-pause-711635" is not "Ready", error: <nil>
	W1213 14:47:41.180477  171187 pod_ready.go:104] pod "etcd-pause-711635" is not "Ready", error: <nil>
	I1213 14:47:37.832034  171994 main.go:143] libmachine: domain force-systemd-env-936726 has defined MAC address 52:54:00:f5:25:0e in network mk-force-systemd-env-936726
	I1213 14:47:37.832730  171994 main.go:143] libmachine: no network interface addresses found for domain force-systemd-env-936726 (source=lease)
	I1213 14:47:37.832749  171994 main.go:143] libmachine: trying to list again with source=arp
	I1213 14:47:37.833160  171994 main.go:143] libmachine: unable to find current IP address of domain force-systemd-env-936726 in network mk-force-systemd-env-936726 (interfaces detected: [])
	I1213 14:47:37.833202  171994 retry.go:31] will retry after 2.789342071s: waiting for domain to come up
	I1213 14:47:40.625594  171994 main.go:143] libmachine: domain force-systemd-env-936726 has defined MAC address 52:54:00:f5:25:0e in network mk-force-systemd-env-936726
	I1213 14:47:40.626450  171994 main.go:143] libmachine: no network interface addresses found for domain force-systemd-env-936726 (source=lease)
	I1213 14:47:40.626471  171994 main.go:143] libmachine: trying to list again with source=arp
	I1213 14:47:40.626894  171994 main.go:143] libmachine: unable to find current IP address of domain force-systemd-env-936726 in network mk-force-systemd-env-936726 (interfaces detected: [])
	I1213 14:47:40.626939  171994 retry.go:31] will retry after 2.567412233s: waiting for domain to come up
	I1213 14:47:40.486883  171817 api_server.go:279] https://192.168.39.154:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 14:47:40.486925  171817 api_server.go:103] status: https://192.168.39.154:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 14:47:40.486947  171817 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8443/healthz ...
	I1213 14:47:40.570023  171817 api_server.go:279] https://192.168.39.154:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 14:47:40.570060  171817 api_server.go:103] status: https://192.168.39.154:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 14:47:40.570097  171817 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8443/healthz ...
	I1213 14:47:40.584749  171817 api_server.go:279] https://192.168.39.154:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 14:47:40.584799  171817 api_server.go:103] status: https://192.168.39.154:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 14:47:41.053502  171817 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8443/healthz ...
	I1213 14:47:41.057960  171817 api_server.go:279] https://192.168.39.154:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 14:47:41.057988  171817 api_server.go:103] status: https://192.168.39.154:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 14:47:41.553260  171817 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8443/healthz ...
	I1213 14:47:41.559790  171817 api_server.go:279] https://192.168.39.154:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 14:47:41.559822  171817 api_server.go:103] status: https://192.168.39.154:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 14:47:42.053563  171817 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8443/healthz ...
	I1213 14:47:42.058580  171817 api_server.go:279] https://192.168.39.154:8443/healthz returned 200:
	ok
	I1213 14:47:42.065810  171817 api_server.go:141] control plane version: v1.32.0
	I1213 14:47:42.065852  171817 api_server.go:131] duration metric: took 4.513234152s to wait for apiserver health ...
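	(Editor's note: the api_server.go lines above show the healthz wait: the endpoint first returns 403 for the anonymous probe, then 500 while post-start hooks such as rbac/bootstrap-roles and the apiservice controllers finish, and finally 200 "ok" after roughly 4.5s. A minimal sketch of such a poll loop, assuming a plain HTTPS GET with TLS verification disabled purely for illustration; the real check authenticates with the profile's client certificates:)

	// waitForHealthz polls GET /healthz until it returns 200, printing the
	// interim 403/500 bodies, and gives up after the timeout.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // "ok": apiserver is healthy
				}
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond) // roughly matches the ~500ms cadence in the log
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.154:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}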
	I1213 14:47:42.065866  171817 cni.go:84] Creating CNI manager for ""
	I1213 14:47:42.065875  171817 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 14:47:42.067630  171817 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 14:47:42.068643  171817 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 14:47:42.079251  171817 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1213 14:47:42.097635  171817 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 14:47:42.101888  171817 system_pods.go:59] 5 kube-system pods found
	I1213 14:47:42.101929  171817 system_pods.go:61] "etcd-stopped-upgrade-729395" [3d0f5358-11ab-473e-828c-52505111c2bf] Pending
	I1213 14:47:42.101941  171817 system_pods.go:61] "kube-apiserver-stopped-upgrade-729395" [9ca106a5-161c-4eca-86ce-2082cb887e2c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 14:47:42.101951  171817 system_pods.go:61] "kube-controller-manager-stopped-upgrade-729395" [8ff945e7-34e1-4ffe-997e-709e3aa0e127] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 14:47:42.101968  171817 system_pods.go:61] "kube-scheduler-stopped-upgrade-729395" [5466ba27-4966-4ae1-8a0e-d29e4d90b269] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 14:47:42.101977  171817 system_pods.go:61] "storage-provisioner" [0a75f7ac-c601-41a1-9f7f-fdaa13e20289] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1213 14:47:42.101987  171817 system_pods.go:74] duration metric: took 4.328596ms to wait for pod list to return data ...
	I1213 14:47:42.101999  171817 node_conditions.go:102] verifying NodePressure condition ...
	I1213 14:47:42.104752  171817 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 14:47:42.104778  171817 node_conditions.go:123] node cpu capacity is 2
	I1213 14:47:42.104794  171817 node_conditions.go:105] duration metric: took 2.789354ms to run NodePressure ...
	I1213 14:47:42.104859  171817 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:47:42.361857  171817 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 14:47:42.373463  171817 ops.go:34] apiserver oom_adj: -16
	I1213 14:47:42.373497  171817 kubeadm.go:602] duration metric: took 7.832937672s to restartPrimaryControlPlane
	I1213 14:47:42.373514  171817 kubeadm.go:403] duration metric: took 7.888751347s to StartCluster
	I1213 14:47:42.373545  171817 settings.go:142] acquiring lock: {Name:mk721202c5d0c56fb9fb8fa9c13a73c8448f716f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 14:47:42.373652  171817 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22122-131207/kubeconfig
	I1213 14:47:42.374800  171817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-131207/kubeconfig: {Name:mk5ec7ec5b8552878ed34d3387da68b813d7cd4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 14:47:42.375133  171817 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.154 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 14:47:42.375229  171817 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 14:47:42.375336  171817 addons.go:70] Setting storage-provisioner=true in profile "stopped-upgrade-729395"
	I1213 14:47:42.375367  171817 addons.go:239] Setting addon storage-provisioner=true in "stopped-upgrade-729395"
	W1213 14:47:42.375380  171817 addons.go:248] addon storage-provisioner should already be in state true
	I1213 14:47:42.375384  171817 config.go:182] Loaded profile config "stopped-upgrade-729395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1213 14:47:42.375406  171817 host.go:66] Checking if "stopped-upgrade-729395" exists ...
	I1213 14:47:42.375523  171817 addons.go:70] Setting default-storageclass=true in profile "stopped-upgrade-729395"
	I1213 14:47:42.375556  171817 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-729395"
	I1213 14:47:42.377140  171817 out.go:179] * Verifying Kubernetes components...
	I1213 14:47:42.377143  171817 out.go:179] * Creating mount /home/jenkins:/minikube-host ...
	I1213 14:47:42.378163  171817 kapi.go:59] client config for stopped-upgrade-729395: &rest.Config{Host:"https://192.168.39.154:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22122-131207/.minikube/profiles/stopped-upgrade-729395/client.crt", KeyFile:"/home/jenkins/minikube-integration/22122-131207/.minikube/profiles/stopped-upgrade-729395/client.key", CAFile:"/home/jenkins/minikube-integration/22122-131207/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 14:47:42.378431  171817 addons.go:239] Setting addon default-storageclass=true in "stopped-upgrade-729395"
	W1213 14:47:42.378447  171817 addons.go:248] addon default-storageclass should already be in state true
	I1213 14:47:42.378474  171817 host.go:66] Checking if "stopped-upgrade-729395" exists ...
	I1213 14:47:42.379190  171817 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 14:47:42.379229  171817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 14:47:42.379616  171817 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/stopped-upgrade-729395/.mount-process: {Name:mke15d6e1465b1121607bf237533c07207c1695d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 14:47:42.380153  171817 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 14:47:42.380180  171817 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 14:47:42.380315  171817 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:47:42.380333  171817 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 14:47:42.383411  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:42.383672  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:42.383846  171817 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:4b:4e", ip: ""} in network mk-stopped-upgrade-729395: {Iface:virbr1 ExpiryTime:2025-12-13 15:47:24 +0000 UTC Type:0 Mac:52:54:00:fb:4b:4e Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:stopped-upgrade-729395 Clientid:01:52:54:00:fb:4b:4e}
	I1213 14:47:42.383874  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined IP address 192.168.39.154 and MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:42.384021  171817 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/stopped-upgrade-729395/id_rsa Username:docker}
	I1213 14:47:42.384207  171817 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:4b:4e", ip: ""} in network mk-stopped-upgrade-729395: {Iface:virbr1 ExpiryTime:2025-12-13 15:47:24 +0000 UTC Type:0 Mac:52:54:00:fb:4b:4e Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:stopped-upgrade-729395 Clientid:01:52:54:00:fb:4b:4e}
	I1213 14:47:42.384232  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined IP address 192.168.39.154 and MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:42.384415  171817 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/stopped-upgrade-729395/id_rsa Username:docker}
	I1213 14:47:42.589755  171817 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 14:47:42.612726  171817 api_server.go:52] waiting for apiserver process to appear ...
	I1213 14:47:42.612808  171817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:47:42.652329  171817 api_server.go:72] duration metric: took 277.137224ms to wait for apiserver process to appear ...
	I1213 14:47:42.652361  171817 api_server.go:88] waiting for apiserver healthz status ...
	I1213 14:47:42.652380  171817 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8443/healthz ...
	I1213 14:47:42.661303  171817 api_server.go:279] https://192.168.39.154:8443/healthz returned 200:
	ok
	I1213 14:47:42.662386  171817 api_server.go:141] control plane version: v1.32.0
	I1213 14:47:42.662426  171817 api_server.go:131] duration metric: took 10.057663ms to wait for apiserver health ...
	I1213 14:47:42.662440  171817 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 14:47:42.670979  171817 system_pods.go:59] 5 kube-system pods found
	I1213 14:47:42.671019  171817 system_pods.go:61] "etcd-stopped-upgrade-729395" [3d0f5358-11ab-473e-828c-52505111c2bf] Pending
	I1213 14:47:42.671032  171817 system_pods.go:61] "kube-apiserver-stopped-upgrade-729395" [9ca106a5-161c-4eca-86ce-2082cb887e2c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 14:47:42.671040  171817 system_pods.go:61] "kube-controller-manager-stopped-upgrade-729395" [8ff945e7-34e1-4ffe-997e-709e3aa0e127] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 14:47:42.671051  171817 system_pods.go:61] "kube-scheduler-stopped-upgrade-729395" [5466ba27-4966-4ae1-8a0e-d29e4d90b269] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 14:47:42.671059  171817 system_pods.go:61] "storage-provisioner" [0a75f7ac-c601-41a1-9f7f-fdaa13e20289] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1213 14:47:42.671068  171817 system_pods.go:74] duration metric: took 8.619669ms to wait for pod list to return data ...
	I1213 14:47:42.671108  171817 kubeadm.go:587] duration metric: took 295.926922ms to wait for: map[apiserver:true system_pods:true]
	I1213 14:47:42.671132  171817 node_conditions.go:102] verifying NodePressure condition ...
	I1213 14:47:42.675026  171817 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 14:47:42.675061  171817 node_conditions.go:123] node cpu capacity is 2
	I1213 14:47:42.675098  171817 node_conditions.go:105] duration metric: took 3.959356ms to run NodePressure ...
	I1213 14:47:42.675125  171817 start.go:242] waiting for startup goroutines ...
	I1213 14:47:42.680027  171817 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:47:42.737760  171817 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:47:43.440325  171817 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1213 14:47:43.441318  171817 addons.go:530] duration metric: took 1.066098822s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1213 14:47:43.441370  171817 start.go:247] waiting for cluster config update ...
	I1213 14:47:43.441389  171817 start.go:256] writing updated cluster config ...
	I1213 14:47:43.441658  171817 ssh_runner.go:195] Run: rm -f paused
	I1213 14:47:43.505713  171817 start.go:625] kubectl: 1.34.3, cluster: 1.32.0 (minor skew: 2)
	I1213 14:47:43.507530  171817 out.go:203] 
	W1213 14:47:43.508795  171817 out.go:285] ! /usr/local/bin/kubectl is version 1.34.3, which may have incompatibilities with Kubernetes 1.32.0.
	I1213 14:47:43.510120  171817 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I1213 14:47:43.511726  171817 out.go:179] * Done! kubectl is now configured to use "stopped-upgrade-729395" cluster and "default" namespace by default
	I1213 14:47:39.235109  171101 api_server.go:269] stopped: https://192.168.72.235:8443/healthz: Get "https://192.168.72.235:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1213 14:47:39.235173  171101 api_server.go:253] Checking apiserver healthz at https://192.168.72.235:8443/healthz ...
	I1213 14:47:39.474802  171101 api_server.go:269] stopped: https://192.168.72.235:8443/healthz: Get "https://192.168.72.235:8443/healthz": read tcp 192.168.72.1:36690->192.168.72.235:8443: read: connection reset by peer
	I1213 14:47:39.733234  171101 api_server.go:253] Checking apiserver healthz at https://192.168.72.235:8443/healthz ...
	I1213 14:47:39.733889  171101 api_server.go:269] stopped: https://192.168.72.235:8443/healthz: Get "https://192.168.72.235:8443/healthz": dial tcp 192.168.72.235:8443: connect: connection refused
	I1213 14:47:40.233265  171101 api_server.go:253] Checking apiserver healthz at https://192.168.72.235:8443/healthz ...
	I1213 14:47:40.233990  171101 api_server.go:269] stopped: https://192.168.72.235:8443/healthz: Get "https://192.168.72.235:8443/healthz": dial tcp 192.168.72.235:8443: connect: connection refused
	I1213 14:47:40.732712  171101 api_server.go:253] Checking apiserver healthz at https://192.168.72.235:8443/healthz ...
	I1213 14:47:40.733476  171101 api_server.go:269] stopped: https://192.168.72.235:8443/healthz: Get "https://192.168.72.235:8443/healthz": dial tcp 192.168.72.235:8443: connect: connection refused
	I1213 14:47:41.233142  171101 api_server.go:253] Checking apiserver healthz at https://192.168.72.235:8443/healthz ...
	I1213 14:47:41.233768  171101 api_server.go:269] stopped: https://192.168.72.235:8443/healthz: Get "https://192.168.72.235:8443/healthz": dial tcp 192.168.72.235:8443: connect: connection refused
	I1213 14:47:41.733301  171101 api_server.go:253] Checking apiserver healthz at https://192.168.72.235:8443/healthz ...
	I1213 14:47:41.734036  171101 api_server.go:269] stopped: https://192.168.72.235:8443/healthz: Get "https://192.168.72.235:8443/healthz": dial tcp 192.168.72.235:8443: connect: connection refused
	I1213 14:47:42.232720  171101 api_server.go:253] Checking apiserver healthz at https://192.168.72.235:8443/healthz ...
	I1213 14:47:42.233503  171101 api_server.go:269] stopped: https://192.168.72.235:8443/healthz: Get "https://192.168.72.235:8443/healthz": dial tcp 192.168.72.235:8443: connect: connection refused
	I1213 14:47:42.733292  171101 api_server.go:253] Checking apiserver healthz at https://192.168.72.235:8443/healthz ...
	I1213 14:47:42.734067  171101 api_server.go:269] stopped: https://192.168.72.235:8443/healthz: Get "https://192.168.72.235:8443/healthz": dial tcp 192.168.72.235:8443: connect: connection refused
	I1213 14:47:43.233466  171101 api_server.go:253] Checking apiserver healthz at https://192.168.72.235:8443/healthz ...
	I1213 14:47:43.234088  171101 api_server.go:269] stopped: https://192.168.72.235:8443/healthz: Get "https://192.168.72.235:8443/healthz": dial tcp 192.168.72.235:8443: connect: connection refused
	W1213 14:47:43.180999  171187 pod_ready.go:104] pod "etcd-pause-711635" is not "Ready", error: <nil>
	I1213 14:47:43.680946  171187 pod_ready.go:94] pod "etcd-pause-711635" is "Ready"
	I1213 14:47:43.680974  171187 pod_ready.go:86] duration metric: took 14.006638109s for pod "etcd-pause-711635" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 14:47:43.683383  171187 pod_ready.go:83] waiting for pod "kube-apiserver-pause-711635" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 14:47:43.687712  171187 pod_ready.go:94] pod "kube-apiserver-pause-711635" is "Ready"
	I1213 14:47:43.687736  171187 pod_ready.go:86] duration metric: took 4.3285ms for pod "kube-apiserver-pause-711635" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 14:47:43.690452  171187 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-711635" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 14:47:43.695396  171187 pod_ready.go:94] pod "kube-controller-manager-pause-711635" is "Ready"
	I1213 14:47:43.695420  171187 pod_ready.go:86] duration metric: took 4.945881ms for pod "kube-controller-manager-pause-711635" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 14:47:43.698432  171187 pod_ready.go:83] waiting for pod "kube-proxy-ck5nd" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 14:47:43.877730  171187 pod_ready.go:94] pod "kube-proxy-ck5nd" is "Ready"
	I1213 14:47:43.877767  171187 pod_ready.go:86] duration metric: took 179.313783ms for pod "kube-proxy-ck5nd" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 14:47:44.079145  171187 pod_ready.go:83] waiting for pod "kube-scheduler-pause-711635" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 14:47:44.478352  171187 pod_ready.go:94] pod "kube-scheduler-pause-711635" is "Ready"
	I1213 14:47:44.478390  171187 pod_ready.go:86] duration metric: took 399.210886ms for pod "kube-scheduler-pause-711635" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 14:47:44.478407  171187 pod_ready.go:40] duration metric: took 14.81740749s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 14:47:44.526649  171187 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1213 14:47:44.528277  171187 out.go:179] * Done! kubectl is now configured to use "pause-711635" cluster and "default" namespace by default
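	(Editor's note: the pod_ready.go lines above wait for each control-plane pod's Ready condition before declaring the cluster restored. A minimal client-go sketch of that kind of wait, assuming a hypothetical kubeconfig path and omitting the "or be gone" case that minikube's helper also accepts for deleted pods:)

	// waitPodReady polls a named pod until its Ready condition is True.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil // pod reports Ready
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
	}

	func main() {
		// kubeconfig path is a hypothetical example, not taken from the log above
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Println(waitPodReady(cs, "kube-system", "etcd-pause-711635", 4*time.Minute))
	}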
	
	
	==> CRI-O <==
	Dec 13 14:47:45 pause-711635 crio[2810]: time="2025-12-13 14:47:45.192879240Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765637265192857004,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=85069cbe-978d-4c2b-91b9-46e486111e94 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 14:47:45 pause-711635 crio[2810]: time="2025-12-13 14:47:45.193936447Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7c540cfc-5ab3-4156-9991-91f68a5a20f0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 14:47:45 pause-711635 crio[2810]: time="2025-12-13 14:47:45.194179950Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7c540cfc-5ab3-4156-9991-91f68a5a20f0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 14:47:45 pause-711635 crio[2810]: time="2025-12-13 14:47:45.194739755Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:203150d60068702447319418b794e907cb9eb775fe0083139f0124f2ec26cd6f,PodSandboxId:3b5d4eb3c145b559192cd59a84f6bda2b048da14bfedc0fe4be65a6a26579d92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765637245567414734,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-711635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27a58fc8e33a4fb63ba834003db9bd35,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\
":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6b973ab72f12b27712b779a75a8502c3905842376c161796ccceed4a4c168fb,PodSandboxId:33cd26a16288a618116a7c49e0aff2a8806ce94dfcb45de8c042148cb29687ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765637245558298071,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-711635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55a12ff0f7e055227f042f32afd9f161,},Annotations:map[string]string{io.kubernetes.container.hash: 5a69
92ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a51d032e9184587d0f79265e51ed8e11030758e36d03690b1e5acdec2ad5884,PodSandboxId:3b574cf8180a8564c27fa7452ac09f44af880d1c0afbb521276a0d768615c21a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765637243037599343,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-711635,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 244b742b6b195b07fb3985b5cdfab02d,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dac16650b8d2aca7e084adc8d169cd7d123d4602922b5efc45d93f2003ddca10,PodSandboxId:0534857599f95c3f4fcc0dbba239aa383f17fbcfc46f08a67d9a7eb813af7cfa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765637243016544954,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-711635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbac580b1f00a7d646efcfa6ed45147d,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeb71d3b8d0041df6dc39d4f7c0565870f1b9f6a5f7c7369c5d95768bfe0f354,PodSandboxId:465940a2d8ab58514a69478a48f0702a3a4f1d91027a99c770dec9753e516512,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:
CONTAINER_RUNNING,CreatedAt:1765637241008807595,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-rtkhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ba241f5-6e50-474a-a043-1120ec1bbfa2,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91cb0910783bea73269ae7ba3630a37c3379547d9c3f65a962c968cea9f14cd7,PodSandboxId:e3e6910a474ac
822de1489e76885e79fd170fccdd6a226440f2ebbe8a60c54d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765637240008213511,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ck5nd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b82a9f3d-e529-4e43-bb38-6b5d2be9e874,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63da8cf30c0728b367192b23c0ad09a95691fefcbe11ed6dfe4005d4552edcd4,PodSandboxId:465940a2d8ab58514a69478a48f0702a3a4f1d91027a99
c770dec9753e516512,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765637222103704339,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-rtkhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ba241f5-6e50-474a-a043-1120ec1bbfa2,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e2b7455204e91c5826b2ce34e52f4243b69ac1bbcaff1a3df0ec71d0dd1af34,PodSandboxId:e3e6910a474ac822de1489e76885e79fd170fccdd6a226440f2ebbe8a60c54d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765637221615216453,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ck5nd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b82a9f3d-e529-4e43-bb38-6b5d2be9e874,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0485cc8093555115bf029d30a24bed9c1eb5e14ad63d4669822c93895d864884,PodSandboxId:0534857599f95c3f4fcc0dbba239aa383f17fbcfc46f08a67d9a7eb813af7cfa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765637221664728547,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-711635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbac580b1f00a7d646efcfa6ed45147d,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55df0f6b2939ce926865c3d822f25235c14de29a8ea6a60cd697e55cdf027d9d,PodSandboxId:3b574cf8180a8564c27fa7452ac09f44af880d1c0afbb521276a0d768615c21a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765637221611886637,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-711635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 244b742b6b195b07fb3985b5cdfab02d,},
Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d368c6c2e20483aa5b61f03e53d3ad11acd476917ff5089ec8350ddabf87027,PodSandboxId:33cd26a16288a618116a7c49e0aff2a8806ce94dfcb45de8c042148cb29687ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765637221549227863,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-711635,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 55a12ff0f7e055227f042f32afd9f161,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6bdf39155d38fe6ad628304312983b81c0e4d68980f0da049cbf021a87a05b9,PodSandboxId:3b5d4eb3c145b559192cd59a84f6bda2b048da14bfedc0fe4be65a6a26579d92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765637221515039526,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-711635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27a58fc8e33a4fb63ba834003db9bd35,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7c540cfc-5ab3-4156-9991-91f68a5a20f0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 14:47:45 pause-711635 crio[2810]: time="2025-12-13 14:47:45.237205330Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3e16d116-a143-4484-9dec-a6f5861d10a5 name=/runtime.v1.RuntimeService/Version
	Dec 13 14:47:45 pause-711635 crio[2810]: time="2025-12-13 14:47:45.237363719Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3e16d116-a143-4484-9dec-a6f5861d10a5 name=/runtime.v1.RuntimeService/Version
	Dec 13 14:47:45 pause-711635 crio[2810]: time="2025-12-13 14:47:45.238911014Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=26c0a4f6-d53a-4e58-a58d-4bb69e5e2789 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 14:47:45 pause-711635 crio[2810]: time="2025-12-13 14:47:45.239488755Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765637265239465302,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=26c0a4f6-d53a-4e58-a58d-4bb69e5e2789 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 14:47:45 pause-711635 crio[2810]: time="2025-12-13 14:47:45.240507278Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fd171180-a220-4d0c-a2a1-00cbc4e9c502 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 14:47:45 pause-711635 crio[2810]: time="2025-12-13 14:47:45.240563845Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fd171180-a220-4d0c-a2a1-00cbc4e9c502 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 14:47:45 pause-711635 crio[2810]: time="2025-12-13 14:47:45.240829300Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:203150d60068702447319418b794e907cb9eb775fe0083139f0124f2ec26cd6f,PodSandboxId:3b5d4eb3c145b559192cd59a84f6bda2b048da14bfedc0fe4be65a6a26579d92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765637245567414734,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-711635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27a58fc8e33a4fb63ba834003db9bd35,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\
":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6b973ab72f12b27712b779a75a8502c3905842376c161796ccceed4a4c168fb,PodSandboxId:33cd26a16288a618116a7c49e0aff2a8806ce94dfcb45de8c042148cb29687ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765637245558298071,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-711635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55a12ff0f7e055227f042f32afd9f161,},Annotations:map[string]string{io.kubernetes.container.hash: 5a69
92ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a51d032e9184587d0f79265e51ed8e11030758e36d03690b1e5acdec2ad5884,PodSandboxId:3b574cf8180a8564c27fa7452ac09f44af880d1c0afbb521276a0d768615c21a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765637243037599343,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-711635,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 244b742b6b195b07fb3985b5cdfab02d,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dac16650b8d2aca7e084adc8d169cd7d123d4602922b5efc45d93f2003ddca10,PodSandboxId:0534857599f95c3f4fcc0dbba239aa383f17fbcfc46f08a67d9a7eb813af7cfa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765637243016544954,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-711635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbac580b1f00a7d646efcfa6ed45147d,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeb71d3b8d0041df6dc39d4f7c0565870f1b9f6a5f7c7369c5d95768bfe0f354,PodSandboxId:465940a2d8ab58514a69478a48f0702a3a4f1d91027a99c770dec9753e516512,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:
CONTAINER_RUNNING,CreatedAt:1765637241008807595,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-rtkhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ba241f5-6e50-474a-a043-1120ec1bbfa2,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91cb0910783bea73269ae7ba3630a37c3379547d9c3f65a962c968cea9f14cd7,PodSandboxId:e3e6910a474ac
822de1489e76885e79fd170fccdd6a226440f2ebbe8a60c54d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765637240008213511,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ck5nd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b82a9f3d-e529-4e43-bb38-6b5d2be9e874,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63da8cf30c0728b367192b23c0ad09a95691fefcbe11ed6dfe4005d4552edcd4,PodSandboxId:465940a2d8ab58514a69478a48f0702a3a4f1d91027a99
c770dec9753e516512,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765637222103704339,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-rtkhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ba241f5-6e50-474a-a043-1120ec1bbfa2,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e2b7455204e91c5826b2ce34e52f4243b69ac1bbcaff1a3df0ec71d0dd1af34,PodSandboxId:e3e6910a474ac822de1489e76885e79fd170fccdd6a226440f2ebbe8a60c54d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765637221615216453,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ck5nd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b82a9f3d-e529-4e43-bb38-6b5d2be9e874,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0485cc8093555115bf029d30a24bed9c1eb5e14ad63d4669822c93895d864884,PodSandboxId:0534857599f95c3f4fcc0dbba239aa383f17fbcfc46f08a67d9a7eb813af7cfa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765637221664728547,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-711635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbac580b1f00a7d646efcfa6ed45147d,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55df0f6b2939ce926865c3d822f25235c14de29a8ea6a60cd697e55cdf027d9d,PodSandboxId:3b574cf8180a8564c27fa7452ac09f44af880d1c0afbb521276a0d768615c21a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765637221611886637,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-711635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 244b742b6b195b07fb3985b5cdfab02d,},
Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d368c6c2e20483aa5b61f03e53d3ad11acd476917ff5089ec8350ddabf87027,PodSandboxId:33cd26a16288a618116a7c49e0aff2a8806ce94dfcb45de8c042148cb29687ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765637221549227863,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-711635,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 55a12ff0f7e055227f042f32afd9f161,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6bdf39155d38fe6ad628304312983b81c0e4d68980f0da049cbf021a87a05b9,PodSandboxId:3b5d4eb3c145b559192cd59a84f6bda2b048da14bfedc0fe4be65a6a26579d92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765637221515039526,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-711635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27a58fc8e33a4fb63ba834003db9bd35,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fd171180-a220-4d0c-a2a1-00cbc4e9c502 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 14:47:45 pause-711635 crio[2810]: time="2025-12-13 14:47:45.283020375Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=787bd865-e7ec-4613-804b-e2944e35259e name=/runtime.v1.RuntimeService/Version
	Dec 13 14:47:45 pause-711635 crio[2810]: time="2025-12-13 14:47:45.283113403Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=787bd865-e7ec-4613-804b-e2944e35259e name=/runtime.v1.RuntimeService/Version
	Dec 13 14:47:45 pause-711635 crio[2810]: time="2025-12-13 14:47:45.284798045Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=09bef043-5068-445a-b3e2-92543193cba5 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 14:47:45 pause-711635 crio[2810]: time="2025-12-13 14:47:45.285275663Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765637265285243745,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=09bef043-5068-445a-b3e2-92543193cba5 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 14:47:45 pause-711635 crio[2810]: time="2025-12-13 14:47:45.286651257Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=37c9562d-6551-48f2-bfcb-50106a668283 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 14:47:45 pause-711635 crio[2810]: time="2025-12-13 14:47:45.286744169Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=37c9562d-6551-48f2-bfcb-50106a668283 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 14:47:45 pause-711635 crio[2810]: time="2025-12-13 14:47:45.287185960Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:203150d60068702447319418b794e907cb9eb775fe0083139f0124f2ec26cd6f,PodSandboxId:3b5d4eb3c145b559192cd59a84f6bda2b048da14bfedc0fe4be65a6a26579d92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765637245567414734,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-711635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27a58fc8e33a4fb63ba834003db9bd35,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\
":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6b973ab72f12b27712b779a75a8502c3905842376c161796ccceed4a4c168fb,PodSandboxId:33cd26a16288a618116a7c49e0aff2a8806ce94dfcb45de8c042148cb29687ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765637245558298071,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-711635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55a12ff0f7e055227f042f32afd9f161,},Annotations:map[string]string{io.kubernetes.container.hash: 5a69
92ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a51d032e9184587d0f79265e51ed8e11030758e36d03690b1e5acdec2ad5884,PodSandboxId:3b574cf8180a8564c27fa7452ac09f44af880d1c0afbb521276a0d768615c21a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765637243037599343,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-711635,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 244b742b6b195b07fb3985b5cdfab02d,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dac16650b8d2aca7e084adc8d169cd7d123d4602922b5efc45d93f2003ddca10,PodSandboxId:0534857599f95c3f4fcc0dbba239aa383f17fbcfc46f08a67d9a7eb813af7cfa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765637243016544954,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-711635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbac580b1f00a7d646efcfa6ed45147d,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeb71d3b8d0041df6dc39d4f7c0565870f1b9f6a5f7c7369c5d95768bfe0f354,PodSandboxId:465940a2d8ab58514a69478a48f0702a3a4f1d91027a99c770dec9753e516512,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:
CONTAINER_RUNNING,CreatedAt:1765637241008807595,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-rtkhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ba241f5-6e50-474a-a043-1120ec1bbfa2,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91cb0910783bea73269ae7ba3630a37c3379547d9c3f65a962c968cea9f14cd7,PodSandboxId:e3e6910a474ac
822de1489e76885e79fd170fccdd6a226440f2ebbe8a60c54d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765637240008213511,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ck5nd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b82a9f3d-e529-4e43-bb38-6b5d2be9e874,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63da8cf30c0728b367192b23c0ad09a95691fefcbe11ed6dfe4005d4552edcd4,PodSandboxId:465940a2d8ab58514a69478a48f0702a3a4f1d91027a99
c770dec9753e516512,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765637222103704339,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-rtkhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ba241f5-6e50-474a-a043-1120ec1bbfa2,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e2b7455204e91c5826b2ce34e52f4243b69ac1bbcaff1a3df0ec71d0dd1af34,PodSandboxId:e3e6910a474ac822de1489e76885e79fd170fccdd6a226440f2ebbe8a60c54d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765637221615216453,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ck5nd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b82a9f3d-e529-4e43-bb38-6b5d2be9e874,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0485cc8093555115bf029d30a24bed9c1eb5e14ad63d4669822c93895d864884,PodSandboxId:0534857599f95c3f4fcc0dbba239aa383f17fbcfc46f08a67d9a7eb813af7cfa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765637221664728547,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-711635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbac580b1f00a7d646efcfa6ed45147d,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55df0f6b2939ce926865c3d822f25235c14de29a8ea6a60cd697e55cdf027d9d,PodSandboxId:3b574cf8180a8564c27fa7452ac09f44af880d1c0afbb521276a0d768615c21a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765637221611886637,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-711635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 244b742b6b195b07fb3985b5cdfab02d,},
Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d368c6c2e20483aa5b61f03e53d3ad11acd476917ff5089ec8350ddabf87027,PodSandboxId:33cd26a16288a618116a7c49e0aff2a8806ce94dfcb45de8c042148cb29687ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765637221549227863,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-711635,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 55a12ff0f7e055227f042f32afd9f161,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6bdf39155d38fe6ad628304312983b81c0e4d68980f0da049cbf021a87a05b9,PodSandboxId:3b5d4eb3c145b559192cd59a84f6bda2b048da14bfedc0fe4be65a6a26579d92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765637221515039526,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-711635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27a58fc8e33a4fb63ba834003db9bd35,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=37c9562d-6551-48f2-bfcb-50106a668283 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 14:47:45 pause-711635 crio[2810]: time="2025-12-13 14:47:45.324870118Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4e2b06a0-6be1-4c93-9b84-89e889d73793 name=/runtime.v1.RuntimeService/Version
	Dec 13 14:47:45 pause-711635 crio[2810]: time="2025-12-13 14:47:45.325028159Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4e2b06a0-6be1-4c93-9b84-89e889d73793 name=/runtime.v1.RuntimeService/Version
	Dec 13 14:47:45 pause-711635 crio[2810]: time="2025-12-13 14:47:45.326863064Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1e110a47-d486-4488-b47a-994e484ac926 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 14:47:45 pause-711635 crio[2810]: time="2025-12-13 14:47:45.327214795Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765637265327192666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1e110a47-d486-4488-b47a-994e484ac926 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 14:47:45 pause-711635 crio[2810]: time="2025-12-13 14:47:45.328527242Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e62f2053-fd70-4bb1-9da9-ab0bcb484efe name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 14:47:45 pause-711635 crio[2810]: time="2025-12-13 14:47:45.328624451Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e62f2053-fd70-4bb1-9da9-ab0bcb484efe name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 14:47:45 pause-711635 crio[2810]: time="2025-12-13 14:47:45.328944712Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:203150d60068702447319418b794e907cb9eb775fe0083139f0124f2ec26cd6f,PodSandboxId:3b5d4eb3c145b559192cd59a84f6bda2b048da14bfedc0fe4be65a6a26579d92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765637245567414734,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-711635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27a58fc8e33a4fb63ba834003db9bd35,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\
":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6b973ab72f12b27712b779a75a8502c3905842376c161796ccceed4a4c168fb,PodSandboxId:33cd26a16288a618116a7c49e0aff2a8806ce94dfcb45de8c042148cb29687ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765637245558298071,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-711635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55a12ff0f7e055227f042f32afd9f161,},Annotations:map[string]string{io.kubernetes.container.hash: 5a69
92ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a51d032e9184587d0f79265e51ed8e11030758e36d03690b1e5acdec2ad5884,PodSandboxId:3b574cf8180a8564c27fa7452ac09f44af880d1c0afbb521276a0d768615c21a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765637243037599343,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-711635,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 244b742b6b195b07fb3985b5cdfab02d,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dac16650b8d2aca7e084adc8d169cd7d123d4602922b5efc45d93f2003ddca10,PodSandboxId:0534857599f95c3f4fcc0dbba239aa383f17fbcfc46f08a67d9a7eb813af7cfa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765637243016544954,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-711635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbac580b1f00a7d646efcfa6ed45147d,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeb71d3b8d0041df6dc39d4f7c0565870f1b9f6a5f7c7369c5d95768bfe0f354,PodSandboxId:465940a2d8ab58514a69478a48f0702a3a4f1d91027a99c770dec9753e516512,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:
CONTAINER_RUNNING,CreatedAt:1765637241008807595,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-rtkhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ba241f5-6e50-474a-a043-1120ec1bbfa2,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91cb0910783bea73269ae7ba3630a37c3379547d9c3f65a962c968cea9f14cd7,PodSandboxId:e3e6910a474ac
822de1489e76885e79fd170fccdd6a226440f2ebbe8a60c54d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765637240008213511,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ck5nd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b82a9f3d-e529-4e43-bb38-6b5d2be9e874,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63da8cf30c0728b367192b23c0ad09a95691fefcbe11ed6dfe4005d4552edcd4,PodSandboxId:465940a2d8ab58514a69478a48f0702a3a4f1d91027a99
c770dec9753e516512,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765637222103704339,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-rtkhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ba241f5-6e50-474a-a043-1120ec1bbfa2,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e2b7455204e91c5826b2ce34e52f4243b69ac1bbcaff1a3df0ec71d0dd1af34,PodSandboxId:e3e6910a474ac822de1489e76885e79fd170fccdd6a226440f2ebbe8a60c54d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765637221615216453,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ck5nd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b82a9f3d-e529-4e43-bb38-6b5d2be9e874,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0485cc8093555115bf029d30a24bed9c1eb5e14ad63d4669822c93895d864884,PodSandboxId:0534857599f95c3f4fcc0dbba239aa383f17fbcfc46f08a67d9a7eb813af7cfa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765637221664728547,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-711635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbac580b1f00a7d646efcfa6ed45147d,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55df0f6b2939ce926865c3d822f25235c14de29a8ea6a60cd697e55cdf027d9d,PodSandboxId:3b574cf8180a8564c27fa7452ac09f44af880d1c0afbb521276a0d768615c21a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765637221611886637,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-711635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 244b742b6b195b07fb3985b5cdfab02d,},
Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d368c6c2e20483aa5b61f03e53d3ad11acd476917ff5089ec8350ddabf87027,PodSandboxId:33cd26a16288a618116a7c49e0aff2a8806ce94dfcb45de8c042148cb29687ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765637221549227863,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-711635,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 55a12ff0f7e055227f042f32afd9f161,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6bdf39155d38fe6ad628304312983b81c0e4d68980f0da049cbf021a87a05b9,PodSandboxId:3b5d4eb3c145b559192cd59a84f6bda2b048da14bfedc0fe4be65a6a26579d92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765637221515039526,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-711635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27a58fc8e33a4fb63ba834003db9bd35,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e62f2053-fd70-4bb1-9da9-ab0bcb484efe name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	203150d600687       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   19 seconds ago      Running             kube-apiserver            2                   3b5d4eb3c145b       kube-apiserver-pause-711635            kube-system
	a6b973ab72f12       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   19 seconds ago      Running             etcd                      2                   33cd26a16288a       etcd-pause-711635                      kube-system
	7a51d032e9184       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   22 seconds ago      Running             kube-controller-manager   2                   3b574cf8180a8       kube-controller-manager-pause-711635   kube-system
	dac16650b8d2a       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   22 seconds ago      Running             kube-scheduler            2                   0534857599f95       kube-scheduler-pause-711635            kube-system
	eeb71d3b8d004       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   24 seconds ago      Running             coredns                   2                   465940a2d8ab5       coredns-66bc5c9577-rtkhx               kube-system
	91cb0910783be       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   25 seconds ago      Running             kube-proxy                2                   e3e6910a474ac       kube-proxy-ck5nd                       kube-system
	63da8cf30c072       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   43 seconds ago      Exited              coredns                   1                   465940a2d8ab5       coredns-66bc5c9577-rtkhx               kube-system
	0485cc8093555       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   43 seconds ago      Exited              kube-scheduler            1                   0534857599f95       kube-scheduler-pause-711635            kube-system
	4e2b7455204e9       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   43 seconds ago      Exited              kube-proxy                1                   e3e6910a474ac       kube-proxy-ck5nd                       kube-system
	55df0f6b2939c       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   43 seconds ago      Exited              kube-controller-manager   1                   3b574cf8180a8       kube-controller-manager-pause-711635   kube-system
	7d368c6c2e204       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   43 seconds ago      Exited              etcd                      1                   33cd26a16288a       etcd-pause-711635                      kube-system
	a6bdf39155d38       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   43 seconds ago      Exited              kube-apiserver            1                   3b5d4eb3c145b       kube-apiserver-pause-711635            kube-system
	
	
	==> coredns [63da8cf30c0728b367192b23c0ad09a95691fefcbe11ed6dfe4005d4552edcd4] <==
	
	
	==> coredns [eeb71d3b8d0041df6dc39d4f7c0565870f1b9f6a5f7c7369c5d95768bfe0f354] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35437 - 27228 "HINFO IN 3838349583231103733.8696173232521958495. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.021937551s
	
	
	==> describe nodes <==
	Name:               pause-711635
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-711635
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7
	                    minikube.k8s.io/name=pause-711635
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T14_45_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 14:45:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-711635
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 14:47:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 14:47:27 +0000   Sat, 13 Dec 2025 14:45:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 14:47:27 +0000   Sat, 13 Dec 2025 14:45:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 14:47:27 +0000   Sat, 13 Dec 2025 14:45:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 14:47:27 +0000   Sat, 13 Dec 2025 14:45:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.50
	  Hostname:    pause-711635
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	System Info:
	  Machine ID:                 d6ed8b0065374a05a3e2f89359d073e1
	  System UUID:                d6ed8b00-6537-4a05-a3e2-f89359d073e1
	  Boot ID:                    773740d6-f388-4ab3-a683-4e6deee155f8
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-rtkhx                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     119s
	  kube-system                 etcd-pause-711635                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         2m4s
	  kube-system                 kube-apiserver-pause-711635             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-controller-manager-pause-711635    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-proxy-ck5nd                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-scheduler-pause-711635             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 117s               kube-proxy       
	  Normal  Starting                 25s                kube-proxy       
	  Normal  Starting                 40s                kube-proxy       
	  Normal  Starting                 2m5s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m4s               kubelet          Node pause-711635 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m4s               kubelet          Node pause-711635 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m4s               kubelet          Node pause-711635 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m4s               kubelet          Node pause-711635 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  2m4s               kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m                 node-controller  Node pause-711635 event: Registered Node pause-711635 in Controller
	  Normal  Starting                 21s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20s (x8 over 20s)  kubelet          Node pause-711635 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20s (x8 over 20s)  kubelet          Node pause-711635 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20s (x7 over 20s)  kubelet          Node pause-711635 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15s                node-controller  Node pause-711635 event: Registered Node pause-711635 in Controller
	
	
	==> dmesg <==
	[Dec13 14:45] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001507] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.008562] (rpcbind)[121]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.171776] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.088908] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.104242] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.146477] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.612482] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.074995] kauditd_printk_skb: 213 callbacks suppressed
	[Dec13 14:46] kauditd_printk_skb: 38 callbacks suppressed
	[Dec13 14:47] kauditd_printk_skb: 319 callbacks suppressed
	[  +0.527616] kauditd_printk_skb: 78 callbacks suppressed
	[  +1.812557] kauditd_printk_skb: 27 callbacks suppressed
	
	
	==> etcd [7d368c6c2e20483aa5b61f03e53d3ad11acd476917ff5089ec8350ddabf87027] <==
	{"level":"warn","ts":"2025-12-13T14:47:04.396927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:04.418375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:04.431356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:04.443285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:04.461255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:04.476304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:04.610213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40572","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-13T14:47:06.077801Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-13T14:47:06.077858Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-711635","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.50:2380"],"advertise-client-urls":["https://192.168.50.50:2379"]}
	{"level":"error","ts":"2025-12-13T14:47:06.077938Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-13T14:47:13.084713Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-13T14:47:13.084763Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T14:47:13.084779Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"c0dcbd712fbd8799","current-leader-member-id":"c0dcbd712fbd8799"}
	{"level":"info","ts":"2025-12-13T14:47:13.084855Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-13T14:47:13.084864Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-13T14:47:13.086376Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.50:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-13T14:47:13.086475Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.50:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-13T14:47:13.086495Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.50:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-13T14:47:13.086549Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-13T14:47:13.086567Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-13T14:47:13.086646Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T14:47:13.088562Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.50.50:2380"}
	{"level":"error","ts":"2025-12-13T14:47:13.088663Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.50:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T14:47:13.088700Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.50.50:2380"}
	{"level":"info","ts":"2025-12-13T14:47:13.088717Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-711635","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.50:2380"],"advertise-client-urls":["https://192.168.50.50:2379"]}
	
	
	==> etcd [a6b973ab72f12b27712b779a75a8502c3905842376c161796ccceed4a4c168fb] <==
	{"level":"warn","ts":"2025-12-13T14:47:26.812908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:26.824681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:26.833171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:26.844139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:26.849081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:26.856617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:26.863267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:26.873863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:26.880702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:26.886380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:26.895974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:26.907156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:26.917619Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:26.925957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:26.935147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:26.943505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:26.952124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:26.958295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:26.967489Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:26.992753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:27.003896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:27.012155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:27.019992Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:27.028933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:27.083504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44038","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:47:45 up 2 min,  0 users,  load average: 0.74, 0.28, 0.10
	Linux pause-711635 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Dec 13 11:18:23 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [203150d60068702447319418b794e907cb9eb775fe0083139f0124f2ec26cd6f] <==
	I1213 14:47:27.804478       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1213 14:47:27.804534       1 policy_source.go:240] refreshing policies
	I1213 14:47:27.804918       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1213 14:47:27.805108       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1213 14:47:27.810278       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 14:47:27.814477       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1213 14:47:27.816641       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1213 14:47:27.816729       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1213 14:47:27.816848       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1213 14:47:27.818082       1 aggregator.go:171] initial CRD sync complete...
	I1213 14:47:27.818130       1 autoregister_controller.go:144] Starting autoregister controller
	I1213 14:47:27.818156       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1213 14:47:27.818172       1 cache.go:39] Caches are synced for autoregister controller
	I1213 14:47:27.824692       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1213 14:47:27.892518       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1213 14:47:27.895010       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 14:47:27.998347       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 14:47:28.609163       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1213 14:47:29.083021       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 14:47:29.122623       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1213 14:47:29.150615       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 14:47:29.157413       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 14:47:31.047964       1 controller.go:667] quota admission added evaluator for: endpoints
	I1213 14:47:31.152344       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 14:47:31.201510       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-apiserver [a6bdf39155d38fe6ad628304312983b81c0e4d68980f0da049cbf021a87a05b9] <==
	W1213 14:47:21.783233       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:47:21.784517       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:47:21.845426       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:47:21.868125       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:47:21.927643       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:47:21.937157       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:47:22.017552       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:47:22.018967       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:47:22.163201       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:47:22.173578       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:47:22.217092       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:47:22.238059       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:47:22.277022       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:47:22.323086       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:47:22.337512       1 logging.go:55] [core] [Channel #8 SubChannel #10]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:47:22.404460       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:47:22.415757       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:47:22.517520       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:47:22.588900       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:47:22.600643       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:47:22.626271       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:47:22.634756       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:47:22.774402       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:47:22.948239       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:47:23.012174       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [55df0f6b2939ce926865c3d822f25235c14de29a8ea6a60cd697e55cdf027d9d] <==
	I1213 14:47:04.097139       1 serving.go:386] Generated self-signed cert in-memory
	I1213 14:47:04.829069       1 controllermanager.go:191] "Starting" version="v1.34.2"
	I1213 14:47:04.829102       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 14:47:04.834599       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1213 14:47:04.834728       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1213 14:47:04.835262       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1213 14:47:04.835352       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	
	
	==> kube-controller-manager [7a51d032e9184587d0f79265e51ed8e11030758e36d03690b1e5acdec2ad5884] <==
	I1213 14:47:30.829984       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1213 14:47:30.831781       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1213 14:47:30.832623       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1213 14:47:30.832771       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-711635"
	I1213 14:47:30.832855       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1213 14:47:30.835097       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1213 14:47:30.848590       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1213 14:47:30.848829       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1213 14:47:30.849088       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1213 14:47:30.849710       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1213 14:47:30.849897       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1213 14:47:30.850892       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1213 14:47:30.851413       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1213 14:47:30.851500       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1213 14:47:30.852085       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1213 14:47:30.857296       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 14:47:30.862192       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1213 14:47:30.862782       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1213 14:47:30.862879       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 14:47:30.864797       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1213 14:47:30.879563       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1213 14:47:30.894952       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1213 14:47:30.898082       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1213 14:47:30.898119       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1213 14:47:31.157089       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [4e2b7455204e91c5826b2ce34e52f4243b69ac1bbcaff1a3df0ec71d0dd1af34] <==
	I1213 14:47:03.381926       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 14:47:05.382100       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 14:47:05.384453       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.50"]
	E1213 14:47:05.386381       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 14:47:05.545763       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1213 14:47:05.545957       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1213 14:47:05.546035       1 server_linux.go:132] "Using iptables Proxier"
	I1213 14:47:05.562261       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 14:47:05.562585       1 server.go:527] "Version info" version="v1.34.2"
	I1213 14:47:05.563041       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 14:47:05.576389       1 config.go:200] "Starting service config controller"
	I1213 14:47:05.576413       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 14:47:05.580356       1 config.go:309] "Starting node config controller"
	I1213 14:47:05.580381       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 14:47:05.580388       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 14:47:05.586362       1 config.go:106] "Starting endpoint slice config controller"
	I1213 14:47:05.586387       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 14:47:05.586540       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 14:47:05.588106       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 14:47:05.679130       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 14:47:05.688955       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1213 14:47:05.689880       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [91cb0910783bea73269ae7ba3630a37c3379547d9c3f65a962c968cea9f14cd7] <==
	I1213 14:47:20.250773       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 14:47:20.250826       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.50"]
	E1213 14:47:20.250966       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 14:47:20.283246       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1213 14:47:20.283382       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1213 14:47:20.283509       1 server_linux.go:132] "Using iptables Proxier"
	I1213 14:47:20.292053       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 14:47:20.292242       1 server.go:527] "Version info" version="v1.34.2"
	I1213 14:47:20.292271       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 14:47:20.296300       1 config.go:200] "Starting service config controller"
	I1213 14:47:20.296398       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 14:47:20.296427       1 config.go:106] "Starting endpoint slice config controller"
	I1213 14:47:20.296445       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 14:47:20.296465       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 14:47:20.296478       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 14:47:20.297989       1 config.go:309] "Starting node config controller"
	I1213 14:47:20.298015       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 14:47:20.298022       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 14:47:20.397053       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 14:47:20.397077       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1213 14:47:20.397059       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	E1213 14:47:23.218502       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": unexpected EOF"
	
	
	==> kube-scheduler [0485cc8093555115bf029d30a24bed9c1eb5e14ad63d4669822c93895d864884] <==
	I1213 14:47:04.489395       1 serving.go:386] Generated self-signed cert in-memory
	I1213 14:47:05.712381       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1213 14:47:05.712412       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1213 14:47:05.712471       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I1213 14:47:05.718727       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1213 14:47:05.718764       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1213 14:47:05.718801       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 14:47:05.718809       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 14:47:05.718820       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1213 14:47:05.718825       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1213 14:47:05.723201       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	E1213 14:47:05.723266       1 server.go:286] "handlers are not fully synchronized" err="context canceled"
	I1213 14:47:05.723375       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1213 14:47:05.723400       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1213 14:47:05.723457       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1213 14:47:05.723479       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1213 14:47:05.723483       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1213 14:47:05.723505       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [dac16650b8d2aca7e084adc8d169cd7d123d4602922b5efc45d93f2003ddca10] <==
	E1213 14:47:25.416423       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.50.50:8443/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.50.50:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 14:47:25.416469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.50.50:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.50.50:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 14:47:25.416556       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.50.50:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.50.50:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 14:47:25.416612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.50.50:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.50.50:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1213 14:47:25.417023       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.50.50:8443/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.50.50:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 14:47:27.750223       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 14:47:27.752581       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 14:47:27.752797       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 14:47:27.752863       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1213 14:47:27.752917       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 14:47:27.752959       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 14:47:27.753012       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1213 14:47:27.753064       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 14:47:27.753107       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1213 14:47:27.753144       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1213 14:47:27.753183       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 14:47:27.753218       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1213 14:47:27.753261       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1213 14:47:27.753295       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 14:47:27.753421       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1213 14:47:27.753472       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 14:47:27.753511       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 14:47:27.753578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1213 14:47:27.760628       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1213 14:47:30.613394       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 13 14:47:27 pause-711635 kubelet[4204]: E1213 14:47:27.087591    4204 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-711635\" not found" node="pause-711635"
	Dec 13 14:47:27 pause-711635 kubelet[4204]: I1213 14:47:27.786991    4204 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-711635"
	Dec 13 14:47:27 pause-711635 kubelet[4204]: I1213 14:47:27.854582    4204 apiserver.go:52] "Watching apiserver"
	Dec 13 14:47:27 pause-711635 kubelet[4204]: I1213 14:47:27.856943    4204 kubelet_node_status.go:124] "Node was previously registered" node="pause-711635"
	Dec 13 14:47:27 pause-711635 kubelet[4204]: I1213 14:47:27.857032    4204 kubelet_node_status.go:78] "Successfully registered node" node="pause-711635"
	Dec 13 14:47:27 pause-711635 kubelet[4204]: I1213 14:47:27.857056    4204 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 13 14:47:27 pause-711635 kubelet[4204]: I1213 14:47:27.858614    4204 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 13 14:47:27 pause-711635 kubelet[4204]: I1213 14:47:27.907667    4204 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 13 14:47:27 pause-711635 kubelet[4204]: E1213 14:47:27.918195    4204 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-711635\" already exists" pod="kube-system/kube-apiserver-pause-711635"
	Dec 13 14:47:27 pause-711635 kubelet[4204]: I1213 14:47:27.918542    4204 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-711635"
	Dec 13 14:47:27 pause-711635 kubelet[4204]: E1213 14:47:27.930914    4204 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-711635\" already exists" pod="kube-system/kube-controller-manager-pause-711635"
	Dec 13 14:47:27 pause-711635 kubelet[4204]: I1213 14:47:27.930951    4204 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-711635"
	Dec 13 14:47:27 pause-711635 kubelet[4204]: E1213 14:47:27.944200    4204 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-711635\" already exists" pod="kube-system/kube-scheduler-pause-711635"
	Dec 13 14:47:27 pause-711635 kubelet[4204]: I1213 14:47:27.944284    4204 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-711635"
	Dec 13 14:47:27 pause-711635 kubelet[4204]: E1213 14:47:27.969187    4204 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-pause-711635\" already exists" pod="kube-system/etcd-pause-711635"
	Dec 13 14:47:27 pause-711635 kubelet[4204]: I1213 14:47:27.987457    4204 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b82a9f3d-e529-4e43-bb38-6b5d2be9e874-lib-modules\") pod \"kube-proxy-ck5nd\" (UID: \"b82a9f3d-e529-4e43-bb38-6b5d2be9e874\") " pod="kube-system/kube-proxy-ck5nd"
	Dec 13 14:47:27 pause-711635 kubelet[4204]: I1213 14:47:27.987502    4204 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b82a9f3d-e529-4e43-bb38-6b5d2be9e874-xtables-lock\") pod \"kube-proxy-ck5nd\" (UID: \"b82a9f3d-e529-4e43-bb38-6b5d2be9e874\") " pod="kube-system/kube-proxy-ck5nd"
	Dec 13 14:47:28 pause-711635 kubelet[4204]: I1213 14:47:28.087106    4204 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-711635"
	Dec 13 14:47:28 pause-711635 kubelet[4204]: I1213 14:47:28.087993    4204 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-711635"
	Dec 13 14:47:28 pause-711635 kubelet[4204]: E1213 14:47:28.101579    4204 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-711635\" already exists" pod="kube-system/kube-apiserver-pause-711635"
	Dec 13 14:47:28 pause-711635 kubelet[4204]: E1213 14:47:28.103104    4204 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-pause-711635\" already exists" pod="kube-system/etcd-pause-711635"
	Dec 13 14:47:35 pause-711635 kubelet[4204]: E1213 14:47:35.041050    4204 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765637255040735169 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	Dec 13 14:47:35 pause-711635 kubelet[4204]: E1213 14:47:35.041071    4204 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765637255040735169 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	Dec 13 14:47:45 pause-711635 kubelet[4204]: E1213 14:47:45.046874    4204 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765637265043822394 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	Dec 13 14:47:45 pause-711635 kubelet[4204]: E1213 14:47:45.047434    4204 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765637265043822394 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-711635 -n pause-711635
helpers_test.go:270: (dbg) Run:  kubectl --context pause-711635 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-711635 -n pause-711635
helpers_test.go:253: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-711635 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p pause-711635 logs -n 25: (1.37210698s)
helpers_test.go:261: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                            ARGS                                                                             │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-303609 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio                                                 │ NoKubernetes-303609      │ jenkins │ v1.37.0 │ 13 Dec 25 14:44 UTC │                     │
	│ start   │ -p NoKubernetes-303609 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                                         │ NoKubernetes-303609      │ jenkins │ v1.37.0 │ 13 Dec 25 14:44 UTC │ 13 Dec 25 14:46 UTC │
	│ start   │ -p running-upgrade-352355 --memory=3072 --vm-driver=kvm2  --container-runtime=crio                                                                          │ running-upgrade-352355   │ jenkins │ v1.35.0 │ 13 Dec 25 14:44 UTC │ 13 Dec 25 14:46 UTC │
	│ start   │ -p NoKubernetes-303609 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                         │ NoKubernetes-303609      │ jenkins │ v1.37.0 │ 13 Dec 25 14:46 UTC │ 13 Dec 25 14:46 UTC │
	│ delete  │ -p NoKubernetes-303609                                                                                                                                      │ NoKubernetes-303609      │ jenkins │ v1.37.0 │ 13 Dec 25 14:46 UTC │ 13 Dec 25 14:46 UTC │
	│ start   │ -p NoKubernetes-303609 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                         │ NoKubernetes-303609      │ jenkins │ v1.37.0 │ 13 Dec 25 14:46 UTC │ 13 Dec 25 14:46 UTC │
	│ delete  │ -p offline-crio-196030                                                                                                                                      │ offline-crio-196030      │ jenkins │ v1.37.0 │ 13 Dec 25 14:46 UTC │ 13 Dec 25 14:46 UTC │
	│ start   │ -p stopped-upgrade-729395 --memory=3072 --vm-driver=kvm2  --container-runtime=crio                                                                          │ stopped-upgrade-729395   │ jenkins │ v1.35.0 │ 13 Dec 25 14:46 UTC │ 13 Dec 25 14:47 UTC │
	│ start   │ -p running-upgrade-352355 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                      │ running-upgrade-352355   │ jenkins │ v1.37.0 │ 13 Dec 25 14:46 UTC │                     │
	│ start   │ -p pause-711635 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                              │ pause-711635             │ jenkins │ v1.37.0 │ 13 Dec 25 14:46 UTC │ 13 Dec 25 14:47 UTC │
	│ ssh     │ -p NoKubernetes-303609 sudo systemctl is-active --quiet service kubelet                                                                                     │ NoKubernetes-303609      │ jenkins │ v1.37.0 │ 13 Dec 25 14:46 UTC │                     │
	│ stop    │ -p NoKubernetes-303609                                                                                                                                      │ NoKubernetes-303609      │ jenkins │ v1.37.0 │ 13 Dec 25 14:46 UTC │ 13 Dec 25 14:46 UTC │
	│ start   │ -p NoKubernetes-303609 --driver=kvm2  --container-runtime=crio                                                                                              │ NoKubernetes-303609      │ jenkins │ v1.37.0 │ 13 Dec 25 14:46 UTC │ 13 Dec 25 14:47 UTC │
	│ stop    │ stopped-upgrade-729395 stop                                                                                                                                 │ stopped-upgrade-729395   │ jenkins │ v1.35.0 │ 13 Dec 25 14:47 UTC │ 13 Dec 25 14:47 UTC │
	│ start   │ -p stopped-upgrade-729395 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                      │ stopped-upgrade-729395   │ jenkins │ v1.37.0 │ 13 Dec 25 14:47 UTC │ 13 Dec 25 14:47 UTC │
	│ ssh     │ -p NoKubernetes-303609 sudo systemctl is-active --quiet service kubelet                                                                                     │ NoKubernetes-303609      │ jenkins │ v1.37.0 │ 13 Dec 25 14:47 UTC │                     │
	│ delete  │ -p NoKubernetes-303609                                                                                                                                      │ NoKubernetes-303609      │ jenkins │ v1.37.0 │ 13 Dec 25 14:47 UTC │ 13 Dec 25 14:47 UTC │
	│ start   │ -p force-systemd-env-936726 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                                    │ force-systemd-env-936726 │ jenkins │ v1.37.0 │ 13 Dec 25 14:47 UTC │                     │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-729395 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker │ stopped-upgrade-729395   │ jenkins │ v1.37.0 │ 13 Dec 25 14:47 UTC │                     │
	│ delete  │ -p stopped-upgrade-729395                                                                                                                                   │ stopped-upgrade-729395   │ jenkins │ v1.37.0 │ 13 Dec 25 14:47 UTC │ 13 Dec 25 14:47 UTC │
	│ ssh     │ -p kubenet-590122 sudo cat /etc/nsswitch.conf                                                                                                               │ kubenet-590122           │ jenkins │ v1.37.0 │ 13 Dec 25 14:47 UTC │                     │
	│ ssh     │ -p kubenet-590122 sudo cat /etc/hosts                                                                                                                       │ kubenet-590122           │ jenkins │ v1.37.0 │ 13 Dec 25 14:47 UTC │                     │
	│ ssh     │ -p kubenet-590122 sudo cat /etc/resolv.conf                                                                                                                 │ kubenet-590122           │ jenkins │ v1.37.0 │ 13 Dec 25 14:47 UTC │                     │
	│ ssh     │ -p kubenet-590122 sudo crictl pods                                                                                                                          │ kubenet-590122           │ jenkins │ v1.37.0 │ 13 Dec 25 14:47 UTC │                     │
	│ ssh     │ -p kubenet-590122 sudo crictl ps --all                                                                                                                      │ kubenet-590122           │ jenkins │ v1.37.0 │ 13 Dec 25 14:47 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 14:47:16
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 14:47:16.435660  171994 out.go:360] Setting OutFile to fd 1 ...
	I1213 14:47:16.435753  171994 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:47:16.435757  171994 out.go:374] Setting ErrFile to fd 2...
	I1213 14:47:16.435761  171994 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:47:16.435972  171994 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-131207/.minikube/bin
	I1213 14:47:16.436450  171994 out.go:368] Setting JSON to false
	I1213 14:47:16.437375  171994 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8976,"bootTime":1765628260,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 14:47:16.437436  171994 start.go:143] virtualization: kvm guest
	I1213 14:47:16.439358  171994 out.go:179] * [force-systemd-env-936726] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 14:47:16.440476  171994 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 14:47:16.440510  171994 notify.go:221] Checking for updates...
	I1213 14:47:16.442650  171994 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 14:47:16.443914  171994 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-131207/kubeconfig
	I1213 14:47:16.444905  171994 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-131207/.minikube
	I1213 14:47:16.445910  171994 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 14:47:16.446961  171994 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1213 14:47:16.448749  171994 config.go:182] Loaded profile config "pause-711635": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 14:47:16.448890  171994 config.go:182] Loaded profile config "running-upgrade-352355": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1213 14:47:16.449010  171994 config.go:182] Loaded profile config "stopped-upgrade-729395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1213 14:47:16.449166  171994 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 14:47:16.485700  171994 out.go:179] * Using the kvm2 driver based on user configuration
	I1213 14:47:16.486718  171994 start.go:309] selected driver: kvm2
	I1213 14:47:16.486732  171994 start.go:927] validating driver "kvm2" against <nil>
	I1213 14:47:16.486743  171994 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 14:47:16.487736  171994 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 14:47:16.488028  171994 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 14:47:16.488066  171994 cni.go:84] Creating CNI manager for ""
	I1213 14:47:16.488141  171994 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 14:47:16.488155  171994 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1213 14:47:16.488206  171994 start.go:353] cluster config:
	{Name:force-systemd-env-936726 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-env-936726 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 14:47:16.488342  171994 iso.go:125] acquiring lock: {Name:mk3b22d147b17c1b05cdcd03e16c3f962e91cdaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 14:47:16.489787  171994 out.go:179] * Starting "force-systemd-env-936726" primary control-plane node in "force-systemd-env-936726" cluster
	I1213 14:47:15.815795  171101 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 d20d51dfd3ea1c42e76f51de987dc0055b975553a6dfddd0e5bdd05d3801b1cd 8d9cbb53f37f0fdfc06cbf4901840cd15b164d449f8151b13ff3fe64cde068d2 44d01a73ae43fd9407a7465076ca692070ba4df0d1d14e079dea312a41c56d3b 91e8ba85fc5fbc949ccc20c641ca680bff9a2e5a34078139d87c51da9dd816db f0d50d08bec5272b956bb1e6af0dfb76281f19aa46b64cac9cd26f8ac1526c28 957e272d55f429581cfdfe818ea79c43c3fbfc7a6887bf41321d5c2b49291907 0e2cb0e5f4fc9f432bdbdf38b000c754fb95abb38d71054bea13f0c8541e92c6 73fc58cd4cf1d4dd68010e09c3bead697b71ca90c0a64f3875129f4ef811eed0 578b9020318eeff9c82d7a2d870ff9596e76b49036bca465add9a5c6d5055325 99089df3b98b98997fd079747afbbc92dcf90e312f3e7022d83db6ca4e6bb39e 5371bbbeabbd9c806ee90aacacab8a7a7ce845d15e45a5b76aa5666a6872c357 cc75839c03015173e538cb5ad19f785cd512a441d597f43dab89c785a111c874 2b0d29cefea470ec86b54a2b6012e3f4b495650bc1fec955330da9a891658c67: (20.278026227s)
	W1213 14:47:15.815876  171101 kubeadm.go:649] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 d20d51dfd3ea1c42e76f51de987dc0055b975553a6dfddd0e5bdd05d3801b1cd 8d9cbb53f37f0fdfc06cbf4901840cd15b164d449f8151b13ff3fe64cde068d2 44d01a73ae43fd9407a7465076ca692070ba4df0d1d14e079dea312a41c56d3b 91e8ba85fc5fbc949ccc20c641ca680bff9a2e5a34078139d87c51da9dd816db f0d50d08bec5272b956bb1e6af0dfb76281f19aa46b64cac9cd26f8ac1526c28 957e272d55f429581cfdfe818ea79c43c3fbfc7a6887bf41321d5c2b49291907 0e2cb0e5f4fc9f432bdbdf38b000c754fb95abb38d71054bea13f0c8541e92c6 73fc58cd4cf1d4dd68010e09c3bead697b71ca90c0a64f3875129f4ef811eed0 578b9020318eeff9c82d7a2d870ff9596e76b49036bca465add9a5c6d5055325 99089df3b98b98997fd079747afbbc92dcf90e312f3e7022d83db6ca4e6bb39e 5371bbbeabbd9c806ee90aacacab8a7a7ce845d15e45a5b76aa5666a6872c357 cc75839c03015173e538cb5ad19f785cd512a441d597f43dab89c785a111c874 2b0d29cefea470ec86b54a2b6012e3f4b495650bc1fec955330da9a891658c67: Process exited with status 1
	stdout:
	d20d51dfd3ea1c42e76f51de987dc0055b975553a6dfddd0e5bdd05d3801b1cd
	8d9cbb53f37f0fdfc06cbf4901840cd15b164d449f8151b13ff3fe64cde068d2
	44d01a73ae43fd9407a7465076ca692070ba4df0d1d14e079dea312a41c56d3b
	91e8ba85fc5fbc949ccc20c641ca680bff9a2e5a34078139d87c51da9dd816db
	f0d50d08bec5272b956bb1e6af0dfb76281f19aa46b64cac9cd26f8ac1526c28
	957e272d55f429581cfdfe818ea79c43c3fbfc7a6887bf41321d5c2b49291907
	0e2cb0e5f4fc9f432bdbdf38b000c754fb95abb38d71054bea13f0c8541e92c6
	
	stderr:
	E1213 14:47:15.802642    3513 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73fc58cd4cf1d4dd68010e09c3bead697b71ca90c0a64f3875129f4ef811eed0\": container with ID starting with 73fc58cd4cf1d4dd68010e09c3bead697b71ca90c0a64f3875129f4ef811eed0 not found: ID does not exist" containerID="73fc58cd4cf1d4dd68010e09c3bead697b71ca90c0a64f3875129f4ef811eed0"
	time="2025-12-13T14:47:15Z" level=fatal msg="stopping the container \"73fc58cd4cf1d4dd68010e09c3bead697b71ca90c0a64f3875129f4ef811eed0\": rpc error: code = NotFound desc = could not find container \"73fc58cd4cf1d4dd68010e09c3bead697b71ca90c0a64f3875129f4ef811eed0\": container with ID starting with 73fc58cd4cf1d4dd68010e09c3bead697b71ca90c0a64f3875129f4ef811eed0 not found: ID does not exist"
	I1213 14:47:15.815955  171101 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 14:47:15.873046  171101 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 14:47:15.885379  171101 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5651 Dec 13 14:46 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5654 Dec 13 14:46 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Dec 13 14:46 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5602 Dec 13 14:46 /etc/kubernetes/scheduler.conf
	
	I1213 14:47:15.885436  171101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 14:47:15.895785  171101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 14:47:15.906847  171101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 14:47:15.918190  171101 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 14:47:15.918258  171101 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 14:47:15.930815  171101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 14:47:15.941371  171101 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 14:47:15.941457  171101 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 14:47:15.953397  171101 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 14:47:15.966229  171101 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:47:16.032907  171101 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:47:17.788387  171101 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.755432086s)
	I1213 14:47:17.788478  171101 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:47:18.031620  171101 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:47:18.095707  171101 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:47:18.209824  171101 api_server.go:52] waiting for apiserver process to appear ...
	I1213 14:47:18.209946  171101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:47:17.576347  171817 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1213 14:47:16.490897  171994 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 14:47:16.490936  171994 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-131207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1213 14:47:16.490945  171994 cache.go:65] Caching tarball of preloaded images
	I1213 14:47:16.491052  171994 preload.go:238] Found /home/jenkins/minikube-integration/22122-131207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 14:47:16.491066  171994 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 14:47:16.491202  171994 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/force-systemd-env-936726/config.json ...
	I1213 14:47:16.491227  171994 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/force-systemd-env-936726/config.json: {Name:mk07fa1e48d4f5f92253610e0bdf6a8f4ee02fc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 14:47:16.491388  171994 start.go:360] acquireMachinesLock for force-systemd-env-936726: {Name:mkd3517afd6ad3d581ae9f96a02a4688cf83ce0e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 14:47:18.710981  171101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:47:19.210293  171101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:47:19.232516  171101 api_server.go:72] duration metric: took 1.02271206s to wait for apiserver process to appear ...
	I1213 14:47:19.232543  171101 api_server.go:88] waiting for apiserver healthz status ...
	I1213 14:47:19.232567  171101 api_server.go:253] Checking apiserver healthz at https://192.168.72.235:8443/healthz ...
	I1213 14:47:23.657338  171817 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1213 14:47:23.273222  171187 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 63da8cf30c0728b367192b23c0ad09a95691fefcbe11ed6dfe4005d4552edcd4 0485cc8093555115bf029d30a24bed9c1eb5e14ad63d4669822c93895d864884 4e2b7455204e91c5826b2ce34e52f4243b69ac1bbcaff1a3df0ec71d0dd1af34 55df0f6b2939ce926865c3d822f25235c14de29a8ea6a60cd697e55cdf027d9d 7d368c6c2e20483aa5b61f03e53d3ad11acd476917ff5089ec8350ddabf87027 a6bdf39155d38fe6ad628304312983b81c0e4d68980f0da049cbf021a87a05b9 84830b7f3a05b649b115275544842e360763bf4e9331d64850437d38cb3d69e1 4d2a56532056260bdd93e4b14a86c36307ac41998452bb61141f06cc76ec9477 f83edb7f56ec7222cb74cca751c4ba220e1a3fcad81dce9e7e241660348b0493 8e9ebe98e0ac3acba58f256c082e0de73e7d9385cb6a2521cb98a062713ecdf4 82999fbb510eeda7012e606ff9f37bb5d429ce07985955e556746cc183dc17a9 d1375c0e9aba7cfa8773de45d54ec8a8d032d9f0666cc081d7ea4d625de4b3bc: (20.691232382s)
	W1213 14:47:23.273372  171187 kubeadm.go:649] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 63da8cf30c0728b367192b23c0ad09a95691fefcbe11ed6dfe4005d4552edcd4 0485cc8093555115bf029d30a24bed9c1eb5e14ad63d4669822c93895d864884 4e2b7455204e91c5826b2ce34e52f4243b69ac1bbcaff1a3df0ec71d0dd1af34 55df0f6b2939ce926865c3d822f25235c14de29a8ea6a60cd697e55cdf027d9d 7d368c6c2e20483aa5b61f03e53d3ad11acd476917ff5089ec8350ddabf87027 a6bdf39155d38fe6ad628304312983b81c0e4d68980f0da049cbf021a87a05b9 84830b7f3a05b649b115275544842e360763bf4e9331d64850437d38cb3d69e1 4d2a56532056260bdd93e4b14a86c36307ac41998452bb61141f06cc76ec9477 f83edb7f56ec7222cb74cca751c4ba220e1a3fcad81dce9e7e241660348b0493 8e9ebe98e0ac3acba58f256c082e0de73e7d9385cb6a2521cb98a062713ecdf4 82999fbb510eeda7012e606ff9f37bb5d429ce07985955e556746cc183dc17a9 d1375c0e9aba7cfa8773de45d54ec8a8d032d9f0666cc081d7ea4d625de4b3bc: Process exited with status 1
	stdout:
	63da8cf30c0728b367192b23c0ad09a95691fefcbe11ed6dfe4005d4552edcd4
	0485cc8093555115bf029d30a24bed9c1eb5e14ad63d4669822c93895d864884
	4e2b7455204e91c5826b2ce34e52f4243b69ac1bbcaff1a3df0ec71d0dd1af34
	55df0f6b2939ce926865c3d822f25235c14de29a8ea6a60cd697e55cdf027d9d
	7d368c6c2e20483aa5b61f03e53d3ad11acd476917ff5089ec8350ddabf87027
	a6bdf39155d38fe6ad628304312983b81c0e4d68980f0da049cbf021a87a05b9
	
	stderr:
	E1213 14:47:23.267778    3637 log.go:32] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84830b7f3a05b649b115275544842e360763bf4e9331d64850437d38cb3d69e1\": container with ID starting with 84830b7f3a05b649b115275544842e360763bf4e9331d64850437d38cb3d69e1 not found: ID does not exist" containerID="84830b7f3a05b649b115275544842e360763bf4e9331d64850437d38cb3d69e1"
	time="2025-12-13T14:47:23Z" level=fatal msg="stopping the container \"84830b7f3a05b649b115275544842e360763bf4e9331d64850437d38cb3d69e1\": rpc error: code = NotFound desc = could not find container \"84830b7f3a05b649b115275544842e360763bf4e9331d64850437d38cb3d69e1\": container with ID starting with 84830b7f3a05b649b115275544842e360763bf4e9331d64850437d38cb3d69e1 not found: ID does not exist"
	I1213 14:47:23.273492  171187 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 14:47:23.326225  171187 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 14:47:23.347586  171187 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 13 14:45 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5637 Dec 13 14:45 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1953 Dec 13 14:45 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5589 Dec 13 14:45 /etc/kubernetes/scheduler.conf
	
	I1213 14:47:23.347670  171187 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 14:47:23.362373  171187 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 14:47:23.374897  171187 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 14:47:23.374967  171187 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 14:47:23.390044  171187 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 14:47:23.404451  171187 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 14:47:23.404531  171187 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 14:47:23.422988  171187 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 14:47:23.436959  171187 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 14:47:23.437028  171187 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 14:47:23.450157  171187 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 14:47:23.461974  171187 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:47:23.548458  171187 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:47:24.361425  171187 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:47:24.698585  171187 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:47:24.783344  171187 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:47:24.881131  171187 api_server.go:52] waiting for apiserver process to appear ...
	I1213 14:47:24.881252  171187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:47:25.382345  171187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:47:25.881313  171187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:47:25.906049  171187 api_server.go:72] duration metric: took 1.024934378s to wait for apiserver process to appear ...
	I1213 14:47:25.906099  171187 api_server.go:88] waiting for apiserver healthz status ...
	I1213 14:47:25.906124  171187 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I1213 14:47:27.845005  171994 start.go:364] duration metric: took 11.353549547s to acquireMachinesLock for "force-systemd-env-936726"
	I1213 14:47:27.845106  171994 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-936726 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-env-936726 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 14:47:27.845255  171994 start.go:125] createHost starting for "" (driver="kvm2")
	I1213 14:47:24.232985  171101 api_server.go:269] stopped: https://192.168.72.235:8443/healthz: Get "https://192.168.72.235:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1213 14:47:24.233114  171101 api_server.go:253] Checking apiserver healthz at https://192.168.72.235:8443/healthz ...
	I1213 14:47:27.644571  171187 api_server.go:279] https://192.168.50.50:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 14:47:27.644609  171187 api_server.go:103] status: https://192.168.50.50:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 14:47:27.644629  171187 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I1213 14:47:27.766738  171187 api_server.go:279] https://192.168.50.50:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 14:47:27.766770  171187 api_server.go:103] status: https://192.168.50.50:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 14:47:27.907159  171187 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I1213 14:47:27.914411  171187 api_server.go:279] https://192.168.50.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 14:47:27.914449  171187 api_server.go:103] status: https://192.168.50.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 14:47:28.407216  171187 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I1213 14:47:28.412847  171187 api_server.go:279] https://192.168.50.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 14:47:28.412881  171187 api_server.go:103] status: https://192.168.50.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 14:47:28.906481  171187 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I1213 14:47:28.912437  171187 api_server.go:279] https://192.168.50.50:8443/healthz returned 200:
	ok
	I1213 14:47:28.922538  171187 api_server.go:141] control plane version: v1.34.2
	I1213 14:47:28.922573  171187 api_server.go:131] duration metric: took 3.016465547s to wait for apiserver health ...
	I1213 14:47:28.922585  171187 cni.go:84] Creating CNI manager for ""
	I1213 14:47:28.922594  171187 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 14:47:28.926188  171187 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 14:47:28.927576  171187 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 14:47:28.951810  171187 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1213 14:47:28.979674  171187 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 14:47:28.985151  171187 system_pods.go:59] 6 kube-system pods found
	I1213 14:47:28.985215  171187 system_pods.go:61] "coredns-66bc5c9577-rtkhx" [5ba241f5-6e50-474a-a043-1120ec1bbfa2] Running
	I1213 14:47:28.985234  171187 system_pods.go:61] "etcd-pause-711635" [d50229c0-e156-423e-9ab1-187eb0f22486] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 14:47:28.985248  171187 system_pods.go:61] "kube-apiserver-pause-711635" [6ab0ad19-01a6-4f2b-9807-ec7ecf230b75] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 14:47:28.985270  171187 system_pods.go:61] "kube-controller-manager-pause-711635" [e8378cef-1390-4fe0-a7b7-c1576fee1eab] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 14:47:28.985276  171187 system_pods.go:61] "kube-proxy-ck5nd" [b82a9f3d-e529-4e43-bb38-6b5d2be9e874] Running
	I1213 14:47:28.985291  171187 system_pods.go:61] "kube-scheduler-pause-711635" [0d44fccb-3015-41d5-ab9e-fc852eac9712] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 14:47:28.985300  171187 system_pods.go:74] duration metric: took 5.604619ms to wait for pod list to return data ...
	I1213 14:47:28.985315  171187 node_conditions.go:102] verifying NodePressure condition ...
	I1213 14:47:28.988565  171187 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 14:47:28.988603  171187 node_conditions.go:123] node cpu capacity is 2
	I1213 14:47:28.988620  171187 node_conditions.go:105] duration metric: took 3.29612ms to run NodePressure ...
	I1213 14:47:28.988681  171187 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:47:29.247652  171187 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1213 14:47:29.251676  171187 kubeadm.go:744] kubelet initialised
	I1213 14:47:29.251708  171187 kubeadm.go:745] duration metric: took 4.028391ms waiting for restarted kubelet to initialise ...
	I1213 14:47:29.251731  171187 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 14:47:29.268540  171187 ops.go:34] apiserver oom_adj: -16
	I1213 14:47:29.268563  171187 kubeadm.go:602] duration metric: took 26.837232093s to restartPrimaryControlPlane
	I1213 14:47:29.268574  171187 kubeadm.go:403] duration metric: took 27.165945698s to StartCluster
	I1213 14:47:29.268594  171187 settings.go:142] acquiring lock: {Name:mk721202c5d0c56fb9fb8fa9c13a73c8448f716f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 14:47:29.268688  171187 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22122-131207/kubeconfig
	I1213 14:47:29.269595  171187 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-131207/kubeconfig: {Name:mk5ec7ec5b8552878ed34d3387da68b813d7cd4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 14:47:29.269870  171187 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.50.50 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 14:47:29.269986  171187 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 14:47:29.270149  171187 config.go:182] Loaded profile config "pause-711635": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 14:47:29.271719  171187 out.go:179] * Verifying Kubernetes components...
	I1213 14:47:29.271727  171187 out.go:179] * Enabled addons: 
	I1213 14:47:26.770554  171817 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 14:47:26.774908  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:26.775510  171817 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:4b:4e", ip: ""} in network mk-stopped-upgrade-729395: {Iface:virbr1 ExpiryTime:2025-12-13 15:47:24 +0000 UTC Type:0 Mac:52:54:00:fb:4b:4e Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:stopped-upgrade-729395 Clientid:01:52:54:00:fb:4b:4e}
	I1213 14:47:26.775536  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined IP address 192.168.39.154 and MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:26.775801  171817 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/stopped-upgrade-729395/config.json ...
	I1213 14:47:26.776118  171817 machine.go:94] provisionDockerMachine start ...
	I1213 14:47:26.778727  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:26.779222  171817 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:4b:4e", ip: ""} in network mk-stopped-upgrade-729395: {Iface:virbr1 ExpiryTime:2025-12-13 15:47:24 +0000 UTC Type:0 Mac:52:54:00:fb:4b:4e Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:stopped-upgrade-729395 Clientid:01:52:54:00:fb:4b:4e}
	I1213 14:47:26.779247  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined IP address 192.168.39.154 and MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:26.779442  171817 main.go:143] libmachine: Using SSH client type: native
	I1213 14:47:26.779647  171817 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I1213 14:47:26.779657  171817 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 14:47:26.888343  171817 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1213 14:47:26.888373  171817 buildroot.go:166] provisioning hostname "stopped-upgrade-729395"
	I1213 14:47:26.891715  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:26.892149  171817 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:4b:4e", ip: ""} in network mk-stopped-upgrade-729395: {Iface:virbr1 ExpiryTime:2025-12-13 15:47:24 +0000 UTC Type:0 Mac:52:54:00:fb:4b:4e Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:stopped-upgrade-729395 Clientid:01:52:54:00:fb:4b:4e}
	I1213 14:47:26.892184  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined IP address 192.168.39.154 and MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:26.892369  171817 main.go:143] libmachine: Using SSH client type: native
	I1213 14:47:26.892671  171817 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I1213 14:47:26.892689  171817 main.go:143] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-729395 && echo "stopped-upgrade-729395" | sudo tee /etc/hostname
	I1213 14:47:27.018014  171817 main.go:143] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-729395
	
	I1213 14:47:27.021144  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:27.021562  171817 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:4b:4e", ip: ""} in network mk-stopped-upgrade-729395: {Iface:virbr1 ExpiryTime:2025-12-13 15:47:24 +0000 UTC Type:0 Mac:52:54:00:fb:4b:4e Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:stopped-upgrade-729395 Clientid:01:52:54:00:fb:4b:4e}
	I1213 14:47:27.021589  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined IP address 192.168.39.154 and MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:27.021735  171817 main.go:143] libmachine: Using SSH client type: native
	I1213 14:47:27.021937  171817 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I1213 14:47:27.021951  171817 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-729395' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-729395/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-729395' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 14:47:27.139341  171817 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 14:47:27.139377  171817 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22122-131207/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-131207/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-131207/.minikube}
	I1213 14:47:27.139414  171817 buildroot.go:174] setting up certificates
	I1213 14:47:27.139427  171817 provision.go:84] configureAuth start
	I1213 14:47:27.142589  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:27.142998  171817 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:4b:4e", ip: ""} in network mk-stopped-upgrade-729395: {Iface:virbr1 ExpiryTime:2025-12-13 15:47:24 +0000 UTC Type:0 Mac:52:54:00:fb:4b:4e Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:stopped-upgrade-729395 Clientid:01:52:54:00:fb:4b:4e}
	I1213 14:47:27.143028  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined IP address 192.168.39.154 and MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:27.145643  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:27.146089  171817 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:4b:4e", ip: ""} in network mk-stopped-upgrade-729395: {Iface:virbr1 ExpiryTime:2025-12-13 15:47:24 +0000 UTC Type:0 Mac:52:54:00:fb:4b:4e Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:stopped-upgrade-729395 Clientid:01:52:54:00:fb:4b:4e}
	I1213 14:47:27.146163  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined IP address 192.168.39.154 and MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:27.146360  171817 provision.go:143] copyHostCerts
	I1213 14:47:27.146423  171817 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-131207/.minikube/key.pem, removing ...
	I1213 14:47:27.146439  171817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-131207/.minikube/key.pem
	I1213 14:47:27.146495  171817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-131207/.minikube/key.pem (1675 bytes)
	I1213 14:47:27.146581  171817 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-131207/.minikube/ca.pem, removing ...
	I1213 14:47:27.146590  171817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-131207/.minikube/ca.pem
	I1213 14:47:27.146611  171817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-131207/.minikube/ca.pem (1078 bytes)
	I1213 14:47:27.146669  171817 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-131207/.minikube/cert.pem, removing ...
	I1213 14:47:27.146676  171817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-131207/.minikube/cert.pem
	I1213 14:47:27.146696  171817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-131207/.minikube/cert.pem (1123 bytes)
	I1213 14:47:27.146741  171817 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-131207/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-729395 san=[127.0.0.1 192.168.39.154 localhost minikube stopped-upgrade-729395]
	I1213 14:47:27.196166  171817 provision.go:177] copyRemoteCerts
	I1213 14:47:27.196239  171817 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 14:47:27.198887  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:27.199322  171817 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:4b:4e", ip: ""} in network mk-stopped-upgrade-729395: {Iface:virbr1 ExpiryTime:2025-12-13 15:47:24 +0000 UTC Type:0 Mac:52:54:00:fb:4b:4e Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:stopped-upgrade-729395 Clientid:01:52:54:00:fb:4b:4e}
	I1213 14:47:27.199355  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined IP address 192.168.39.154 and MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:27.199487  171817 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/stopped-upgrade-729395/id_rsa Username:docker}
	I1213 14:47:27.281474  171817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1213 14:47:27.305816  171817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 14:47:27.332293  171817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 14:47:27.356682  171817 provision.go:87] duration metric: took 217.23265ms to configureAuth
	I1213 14:47:27.356711  171817 buildroot.go:189] setting minikube options for container-runtime
	I1213 14:47:27.356933  171817 config.go:182] Loaded profile config "stopped-upgrade-729395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1213 14:47:27.359606  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:27.360014  171817 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:4b:4e", ip: ""} in network mk-stopped-upgrade-729395: {Iface:virbr1 ExpiryTime:2025-12-13 15:47:24 +0000 UTC Type:0 Mac:52:54:00:fb:4b:4e Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:stopped-upgrade-729395 Clientid:01:52:54:00:fb:4b:4e}
	I1213 14:47:27.360051  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined IP address 192.168.39.154 and MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:27.360247  171817 main.go:143] libmachine: Using SSH client type: native
	I1213 14:47:27.360463  171817 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I1213 14:47:27.360483  171817 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 14:47:27.597199  171817 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 14:47:27.597230  171817 machine.go:97] duration metric: took 821.093302ms to provisionDockerMachine
	I1213 14:47:27.597242  171817 start.go:293] postStartSetup for "stopped-upgrade-729395" (driver="kvm2")
	I1213 14:47:27.597253  171817 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 14:47:27.597324  171817 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 14:47:27.600199  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:27.600619  171817 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:4b:4e", ip: ""} in network mk-stopped-upgrade-729395: {Iface:virbr1 ExpiryTime:2025-12-13 15:47:24 +0000 UTC Type:0 Mac:52:54:00:fb:4b:4e Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:stopped-upgrade-729395 Clientid:01:52:54:00:fb:4b:4e}
	I1213 14:47:27.600651  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined IP address 192.168.39.154 and MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:27.600793  171817 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/stopped-upgrade-729395/id_rsa Username:docker}
	I1213 14:47:27.683437  171817 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 14:47:27.688885  171817 info.go:137] Remote host: Buildroot 2023.02.9
	I1213 14:47:27.688917  171817 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-131207/.minikube/addons for local assets ...
	I1213 14:47:27.688990  171817 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-131207/.minikube/files for local assets ...
	I1213 14:47:27.689113  171817 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/ssl/certs/1352342.pem -> 1352342.pem in /etc/ssl/certs
	I1213 14:47:27.689248  171817 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 14:47:27.703357  171817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/ssl/certs/1352342.pem --> /etc/ssl/certs/1352342.pem (1708 bytes)
	I1213 14:47:27.732717  171817 start.go:296] duration metric: took 135.436793ms for postStartSetup
	I1213 14:47:27.732764  171817 fix.go:56] duration metric: took 14.582511563s for fixHost
	I1213 14:47:27.735770  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:27.736300  171817 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:4b:4e", ip: ""} in network mk-stopped-upgrade-729395: {Iface:virbr1 ExpiryTime:2025-12-13 15:47:24 +0000 UTC Type:0 Mac:52:54:00:fb:4b:4e Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:stopped-upgrade-729395 Clientid:01:52:54:00:fb:4b:4e}
	I1213 14:47:27.736331  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined IP address 192.168.39.154 and MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:27.736528  171817 main.go:143] libmachine: Using SSH client type: native
	I1213 14:47:27.736789  171817 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I1213 14:47:27.736801  171817 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1213 14:47:27.844789  171817 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765637247.807140681
	
	I1213 14:47:27.844829  171817 fix.go:216] guest clock: 1765637247.807140681
	I1213 14:47:27.844839  171817 fix.go:229] Guest: 2025-12-13 14:47:27.807140681 +0000 UTC Remote: 2025-12-13 14:47:27.732768896 +0000 UTC m=+23.195388568 (delta=74.371785ms)
	I1213 14:47:27.844874  171817 fix.go:200] guest clock delta is within tolerance: 74.371785ms
	I1213 14:47:27.844888  171817 start.go:83] releasing machines lock for "stopped-upgrade-729395", held for 14.694665435s
	I1213 14:47:27.848166  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:27.848686  171817 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:4b:4e", ip: ""} in network mk-stopped-upgrade-729395: {Iface:virbr1 ExpiryTime:2025-12-13 15:47:24 +0000 UTC Type:0 Mac:52:54:00:fb:4b:4e Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:stopped-upgrade-729395 Clientid:01:52:54:00:fb:4b:4e}
	I1213 14:47:27.848729  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined IP address 192.168.39.154 and MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:27.849338  171817 ssh_runner.go:195] Run: cat /version.json
	I1213 14:47:27.849438  171817 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 14:47:27.853206  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:27.853356  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:27.853672  171817 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:4b:4e", ip: ""} in network mk-stopped-upgrade-729395: {Iface:virbr1 ExpiryTime:2025-12-13 15:47:24 +0000 UTC Type:0 Mac:52:54:00:fb:4b:4e Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:stopped-upgrade-729395 Clientid:01:52:54:00:fb:4b:4e}
	I1213 14:47:27.853705  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined IP address 192.168.39.154 and MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:27.853869  171817 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:4b:4e", ip: ""} in network mk-stopped-upgrade-729395: {Iface:virbr1 ExpiryTime:2025-12-13 15:47:24 +0000 UTC Type:0 Mac:52:54:00:fb:4b:4e Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:stopped-upgrade-729395 Clientid:01:52:54:00:fb:4b:4e}
	I1213 14:47:27.853897  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined IP address 192.168.39.154 and MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:27.853898  171817 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/stopped-upgrade-729395/id_rsa Username:docker}
	I1213 14:47:27.854131  171817 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/stopped-upgrade-729395/id_rsa Username:docker}
	W1213 14:47:27.962095  171817 out.go:285] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.35.0 -> Actual minikube version: v1.37.0
	I1213 14:47:27.962196  171817 ssh_runner.go:195] Run: systemctl --version
	I1213 14:47:27.969240  171817 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 14:47:28.127096  171817 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 14:47:28.135899  171817 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 14:47:28.135989  171817 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 14:47:28.153417  171817 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 14:47:28.153446  171817 start.go:496] detecting cgroup driver to use...
	I1213 14:47:28.153533  171817 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 14:47:28.171887  171817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 14:47:28.188156  171817 docker.go:218] disabling cri-docker service (if available) ...
	I1213 14:47:28.188228  171817 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 14:47:28.203323  171817 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 14:47:28.217698  171817 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 14:47:28.345675  171817 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 14:47:28.492153  171817 docker.go:234] disabling docker service ...
	I1213 14:47:28.492247  171817 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 14:47:28.510264  171817 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 14:47:28.525997  171817 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 14:47:28.675716  171817 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 14:47:28.830784  171817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 14:47:28.844419  171817 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 14:47:28.864168  171817 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1213 14:47:28.864225  171817 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 14:47:28.874198  171817 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 14:47:28.874249  171817 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 14:47:28.883781  171817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 14:47:28.893252  171817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 14:47:28.902623  171817 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 14:47:28.912962  171817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 14:47:28.925186  171817 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 14:47:28.948354  171817 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 14:47:28.963065  171817 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 14:47:28.974284  171817 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 14:47:28.974356  171817 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 14:47:28.991810  171817 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 14:47:29.004458  171817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 14:47:29.152781  171817 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 14:47:29.252669  171817 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 14:47:29.252758  171817 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 14:47:29.258989  171817 start.go:564] Will wait 60s for crictl version
	I1213 14:47:29.259088  171817 ssh_runner.go:195] Run: which crictl
	I1213 14:47:29.263328  171817 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 14:47:29.301423  171817 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1213 14:47:29.301516  171817 ssh_runner.go:195] Run: crio --version
	I1213 14:47:29.334437  171817 ssh_runner.go:195] Run: crio --version
	I1213 14:47:29.370001  171817 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1213 14:47:29.375867  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:29.377945  171817 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:4b:4e", ip: ""} in network mk-stopped-upgrade-729395: {Iface:virbr1 ExpiryTime:2025-12-13 15:47:24 +0000 UTC Type:0 Mac:52:54:00:fb:4b:4e Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:stopped-upgrade-729395 Clientid:01:52:54:00:fb:4b:4e}
	I1213 14:47:29.377973  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined IP address 192.168.39.154 and MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:29.378330  171817 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1213 14:47:29.383645  171817 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 14:47:29.400646  171817 kubeadm.go:884] updating cluster {Name:stopped-upgrade-729395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:stopped-upgrade-729395 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 14:47:29.400772  171817 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1213 14:47:29.400840  171817 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 14:47:29.463100  171817 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1213 14:47:29.463178  171817 ssh_runner.go:195] Run: which lz4
	I1213 14:47:29.468693  171817 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1213 14:47:29.474533  171817 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1213 14:47:29.474575  171817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1213 14:47:29.273099  171187 addons.go:530] duration metric: took 3.126612ms for enable addons: enabled=[]
	I1213 14:47:29.273124  171187 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 14:47:29.545226  171187 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 14:47:29.565348  171187 node_ready.go:35] waiting up to 6m0s for node "pause-711635" to be "Ready" ...
	I1213 14:47:29.568611  171187 node_ready.go:49] node "pause-711635" is "Ready"
	I1213 14:47:29.568643  171187 node_ready.go:38] duration metric: took 3.249885ms for node "pause-711635" to be "Ready" ...
	I1213 14:47:29.568660  171187 api_server.go:52] waiting for apiserver process to appear ...
	I1213 14:47:29.568714  171187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:47:29.588092  171187 api_server.go:72] duration metric: took 318.171091ms to wait for apiserver process to appear ...
	I1213 14:47:29.588120  171187 api_server.go:88] waiting for apiserver healthz status ...
	I1213 14:47:29.588142  171187 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I1213 14:47:29.593541  171187 api_server.go:279] https://192.168.50.50:8443/healthz returned 200:
	ok
	I1213 14:47:29.594957  171187 api_server.go:141] control plane version: v1.34.2
	I1213 14:47:29.594979  171187 api_server.go:131] duration metric: took 6.852168ms to wait for apiserver health ...
	I1213 14:47:29.594988  171187 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 14:47:29.600055  171187 system_pods.go:59] 6 kube-system pods found
	I1213 14:47:29.600100  171187 system_pods.go:61] "coredns-66bc5c9577-rtkhx" [5ba241f5-6e50-474a-a043-1120ec1bbfa2] Running
	I1213 14:47:29.600115  171187 system_pods.go:61] "etcd-pause-711635" [d50229c0-e156-423e-9ab1-187eb0f22486] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 14:47:29.600125  171187 system_pods.go:61] "kube-apiserver-pause-711635" [6ab0ad19-01a6-4f2b-9807-ec7ecf230b75] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 14:47:29.600138  171187 system_pods.go:61] "kube-controller-manager-pause-711635" [e8378cef-1390-4fe0-a7b7-c1576fee1eab] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 14:47:29.600147  171187 system_pods.go:61] "kube-proxy-ck5nd" [b82a9f3d-e529-4e43-bb38-6b5d2be9e874] Running
	I1213 14:47:29.600153  171187 system_pods.go:61] "kube-scheduler-pause-711635" [0d44fccb-3015-41d5-ab9e-fc852eac9712] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 14:47:29.600166  171187 system_pods.go:74] duration metric: took 5.172302ms to wait for pod list to return data ...
	I1213 14:47:29.600176  171187 default_sa.go:34] waiting for default service account to be created ...
	I1213 14:47:29.606064  171187 default_sa.go:45] found service account: "default"
	I1213 14:47:29.606111  171187 default_sa.go:55] duration metric: took 5.925654ms for default service account to be created ...
	I1213 14:47:29.606123  171187 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 14:47:29.610971  171187 system_pods.go:86] 6 kube-system pods found
	I1213 14:47:29.611008  171187 system_pods.go:89] "coredns-66bc5c9577-rtkhx" [5ba241f5-6e50-474a-a043-1120ec1bbfa2] Running
	I1213 14:47:29.611023  171187 system_pods.go:89] "etcd-pause-711635" [d50229c0-e156-423e-9ab1-187eb0f22486] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 14:47:29.611037  171187 system_pods.go:89] "kube-apiserver-pause-711635" [6ab0ad19-01a6-4f2b-9807-ec7ecf230b75] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 14:47:29.611052  171187 system_pods.go:89] "kube-controller-manager-pause-711635" [e8378cef-1390-4fe0-a7b7-c1576fee1eab] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 14:47:29.611060  171187 system_pods.go:89] "kube-proxy-ck5nd" [b82a9f3d-e529-4e43-bb38-6b5d2be9e874] Running
	I1213 14:47:29.611087  171187 system_pods.go:89] "kube-scheduler-pause-711635" [0d44fccb-3015-41d5-ab9e-fc852eac9712] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 14:47:29.611103  171187 system_pods.go:126] duration metric: took 4.971487ms to wait for k8s-apps to be running ...
	I1213 14:47:29.611122  171187 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 14:47:29.611192  171187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 14:47:29.648581  171187 system_svc.go:56] duration metric: took 37.44806ms WaitForService to wait for kubelet
	I1213 14:47:29.648619  171187 kubeadm.go:587] duration metric: took 378.716135ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 14:47:29.648641  171187 node_conditions.go:102] verifying NodePressure condition ...
	I1213 14:47:29.652990  171187 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 14:47:29.653017  171187 node_conditions.go:123] node cpu capacity is 2
	I1213 14:47:29.653035  171187 node_conditions.go:105] duration metric: took 4.387602ms to run NodePressure ...
	I1213 14:47:29.653051  171187 start.go:242] waiting for startup goroutines ...
	I1213 14:47:29.653063  171187 start.go:247] waiting for cluster config update ...
	I1213 14:47:29.653097  171187 start.go:256] writing updated cluster config ...
	I1213 14:47:29.653496  171187 ssh_runner.go:195] Run: rm -f paused
	I1213 14:47:29.660969  171187 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 14:47:29.661751  171187 kapi.go:59] client config for pause-711635: &rest.Config{Host:"https://192.168.50.50:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22122-131207/.minikube/profiles/pause-711635/client.crt", KeyFile:"/home/jenkins/minikube-integration/22122-131207/.minikube/profiles/pause-711635/client.key", CAFile:"/home/jenkins/minikube-integration/22122-131207/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 14:47:29.665279  171187 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rtkhx" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 14:47:29.671684  171187 pod_ready.go:94] pod "coredns-66bc5c9577-rtkhx" is "Ready"
	I1213 14:47:29.671710  171187 pod_ready.go:86] duration metric: took 6.402456ms for pod "coredns-66bc5c9577-rtkhx" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 14:47:29.674313  171187 pod_ready.go:83] waiting for pod "etcd-pause-711635" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 14:47:27.847419  171994 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1213 14:47:27.847680  171994 start.go:159] libmachine.API.Create for "force-systemd-env-936726" (driver="kvm2")
	I1213 14:47:27.847722  171994 client.go:173] LocalClient.Create starting
	I1213 14:47:27.847825  171994 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem
	I1213 14:47:27.847868  171994 main.go:143] libmachine: Decoding PEM data...
	I1213 14:47:27.847895  171994 main.go:143] libmachine: Parsing certificate...
	I1213 14:47:27.847975  171994 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22122-131207/.minikube/certs/cert.pem
	I1213 14:47:27.848004  171994 main.go:143] libmachine: Decoding PEM data...
	I1213 14:47:27.848027  171994 main.go:143] libmachine: Parsing certificate...
	I1213 14:47:27.848558  171994 main.go:143] libmachine: creating domain...
	I1213 14:47:27.848576  171994 main.go:143] libmachine: creating network...
	I1213 14:47:27.850655  171994 main.go:143] libmachine: found existing default network
	I1213 14:47:27.850968  171994 main.go:143] libmachine: <network connections='3'>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1213 14:47:27.852265  171994 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:55:29:24} reservation:<nil>}
	I1213 14:47:27.852928  171994 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:78:61:79} reservation:<nil>}
	I1213 14:47:27.853994  171994 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c51440}
	I1213 14:47:27.854146  171994 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-force-systemd-env-936726</name>
	  <dns enable='no'/>
	  <ip address='192.168.61.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.61.2' end='192.168.61.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1213 14:47:27.862750  171994 main.go:143] libmachine: creating private network mk-force-systemd-env-936726 192.168.61.0/24...
	I1213 14:47:27.948619  171994 main.go:143] libmachine: private network mk-force-systemd-env-936726 192.168.61.0/24 created
	I1213 14:47:27.948951  171994 main.go:143] libmachine: <network>
	  <name>mk-force-systemd-env-936726</name>
	  <uuid>39ccad8f-9e79-4a06-be31-341b15d55603</uuid>
	  <bridge name='virbr3' stp='on' delay='0'/>
	  <mac address='52:54:00:65:30:e8'/>
	  <dns enable='no'/>
	  <ip address='192.168.61.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.61.2' end='192.168.61.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1213 14:47:27.948983  171994 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22122-131207/.minikube/machines/force-systemd-env-936726 ...
	I1213 14:47:27.949028  171994 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22122-131207/.minikube/cache/iso/amd64/minikube-v1.37.0-1765613186-22122-amd64.iso
	I1213 14:47:27.949046  171994 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22122-131207/.minikube
	I1213 14:47:27.949161  171994 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22122-131207/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22122-131207/.minikube/cache/iso/amd64/minikube-v1.37.0-1765613186-22122-amd64.iso...
	I1213 14:47:28.245972  171994 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22122-131207/.minikube/machines/force-systemd-env-936726/id_rsa...
	I1213 14:47:28.266477  171994 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22122-131207/.minikube/machines/force-systemd-env-936726/force-systemd-env-936726.rawdisk...
	I1213 14:47:28.266518  171994 main.go:143] libmachine: Writing magic tar header
	I1213 14:47:28.266544  171994 main.go:143] libmachine: Writing SSH key tar header
	I1213 14:47:28.266642  171994 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22122-131207/.minikube/machines/force-systemd-env-936726 ...
	I1213 14:47:28.266735  171994 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22122-131207/.minikube/machines/force-systemd-env-936726
	I1213 14:47:28.266772  171994 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22122-131207/.minikube/machines/force-systemd-env-936726 (perms=drwx------)
	I1213 14:47:28.266791  171994 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22122-131207/.minikube/machines
	I1213 14:47:28.266810  171994 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22122-131207/.minikube/machines (perms=drwxr-xr-x)
	I1213 14:47:28.266824  171994 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22122-131207/.minikube
	I1213 14:47:28.266834  171994 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22122-131207/.minikube (perms=drwxr-xr-x)
	I1213 14:47:28.266844  171994 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22122-131207
	I1213 14:47:28.266862  171994 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22122-131207 (perms=drwxrwxr-x)
	I1213 14:47:28.266881  171994 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1213 14:47:28.266894  171994 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1213 14:47:28.266906  171994 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1213 14:47:28.266919  171994 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1213 14:47:28.266930  171994 main.go:143] libmachine: checking permissions on dir: /home
	I1213 14:47:28.266940  171994 main.go:143] libmachine: skipping /home - not owner
	I1213 14:47:28.266945  171994 main.go:143] libmachine: defining domain...
	I1213 14:47:28.268351  171994 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>force-systemd-env-936726</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22122-131207/.minikube/machines/force-systemd-env-936726/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22122-131207/.minikube/machines/force-systemd-env-936726/force-systemd-env-936726.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-force-systemd-env-936726'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1213 14:47:28.273261  171994 main.go:143] libmachine: domain force-systemd-env-936726 has defined MAC address 52:54:00:87:5c:14 in network default
	I1213 14:47:28.273937  171994 main.go:143] libmachine: domain force-systemd-env-936726 has defined MAC address 52:54:00:f5:25:0e in network mk-force-systemd-env-936726
	I1213 14:47:28.273955  171994 main.go:143] libmachine: starting domain...
	I1213 14:47:28.273959  171994 main.go:143] libmachine: ensuring networks are active...
	I1213 14:47:28.274922  171994 main.go:143] libmachine: Ensuring network default is active
	I1213 14:47:28.275532  171994 main.go:143] libmachine: Ensuring network mk-force-systemd-env-936726 is active
	I1213 14:47:28.276444  171994 main.go:143] libmachine: getting domain XML...
	I1213 14:47:28.278041  171994 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>force-systemd-env-936726</name>
	  <uuid>2c97758d-da61-4b3e-b6e1-87b6b49a456d</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22122-131207/.minikube/machines/force-systemd-env-936726/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22122-131207/.minikube/machines/force-systemd-env-936726/force-systemd-env-936726.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:f5:25:0e'/>
	      <source network='mk-force-systemd-env-936726'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:87:5c:14'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1213 14:47:29.849846  171994 main.go:143] libmachine: waiting for domain to start...
	I1213 14:47:29.851473  171994 main.go:143] libmachine: domain is now running
	I1213 14:47:29.851493  171994 main.go:143] libmachine: waiting for IP...
	I1213 14:47:29.852525  171994 main.go:143] libmachine: domain force-systemd-env-936726 has defined MAC address 52:54:00:f5:25:0e in network mk-force-systemd-env-936726
	I1213 14:47:29.853404  171994 main.go:143] libmachine: no network interface addresses found for domain force-systemd-env-936726 (source=lease)
	I1213 14:47:29.853420  171994 main.go:143] libmachine: trying to list again with source=arp
	I1213 14:47:29.853814  171994 main.go:143] libmachine: unable to find current IP address of domain force-systemd-env-936726 in network mk-force-systemd-env-936726 (interfaces detected: [])
	I1213 14:47:29.853872  171994 retry.go:31] will retry after 309.812108ms: waiting for domain to come up
	I1213 14:47:30.165468  171994 main.go:143] libmachine: domain force-systemd-env-936726 has defined MAC address 52:54:00:f5:25:0e in network mk-force-systemd-env-936726
	I1213 14:47:30.166343  171994 main.go:143] libmachine: no network interface addresses found for domain force-systemd-env-936726 (source=lease)
	I1213 14:47:30.166365  171994 main.go:143] libmachine: trying to list again with source=arp
	I1213 14:47:30.166734  171994 main.go:143] libmachine: unable to find current IP address of domain force-systemd-env-936726 in network mk-force-systemd-env-936726 (interfaces detected: [])
	I1213 14:47:30.166779  171994 retry.go:31] will retry after 373.272172ms: waiting for domain to come up
	I1213 14:47:30.541387  171994 main.go:143] libmachine: domain force-systemd-env-936726 has defined MAC address 52:54:00:f5:25:0e in network mk-force-systemd-env-936726
	I1213 14:47:30.542174  171994 main.go:143] libmachine: no network interface addresses found for domain force-systemd-env-936726 (source=lease)
	I1213 14:47:30.542207  171994 main.go:143] libmachine: trying to list again with source=arp
	I1213 14:47:30.542569  171994 main.go:143] libmachine: unable to find current IP address of domain force-systemd-env-936726 in network mk-force-systemd-env-936726 (interfaces detected: [])
	I1213 14:47:30.542608  171994 retry.go:31] will retry after 450.473735ms: waiting for domain to come up
	I1213 14:47:30.994575  171994 main.go:143] libmachine: domain force-systemd-env-936726 has defined MAC address 52:54:00:f5:25:0e in network mk-force-systemd-env-936726
	I1213 14:47:30.995576  171994 main.go:143] libmachine: no network interface addresses found for domain force-systemd-env-936726 (source=lease)
	I1213 14:47:30.995597  171994 main.go:143] libmachine: trying to list again with source=arp
	I1213 14:47:30.996017  171994 main.go:143] libmachine: unable to find current IP address of domain force-systemd-env-936726 in network mk-force-systemd-env-936726 (interfaces detected: [])
	I1213 14:47:30.996087  171994 retry.go:31] will retry after 479.757929ms: waiting for domain to come up
	I1213 14:47:29.233774  171101 api_server.go:269] stopped: https://192.168.72.235:8443/healthz: Get "https://192.168.72.235:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1213 14:47:29.233866  171101 api_server.go:253] Checking apiserver healthz at https://192.168.72.235:8443/healthz ...
	I1213 14:47:30.954824  171817 crio.go:462] duration metric: took 1.486181592s to copy over tarball
	I1213 14:47:30.954936  171817 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1213 14:47:33.561388  171817 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.60640639s)
	I1213 14:47:33.561422  171817 crio.go:469] duration metric: took 2.606552797s to extract the tarball
	I1213 14:47:33.561434  171817 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1213 14:47:33.599854  171817 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 14:47:33.642662  171817 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 14:47:33.642688  171817 cache_images.go:86] Images are preloaded, skipping loading
	I1213 14:47:33.642696  171817 kubeadm.go:935] updating node { 192.168.39.154 8443 v1.32.0 crio true true} ...
	I1213 14:47:33.642816  171817 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=stopped-upgrade-729395 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.154
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:stopped-upgrade-729395 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 14:47:33.642891  171817 ssh_runner.go:195] Run: crio config
	I1213 14:47:33.692944  171817 cni.go:84] Creating CNI manager for ""
	I1213 14:47:33.692976  171817 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 14:47:33.693000  171817 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 14:47:33.693030  171817 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.154 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-729395 NodeName:stopped-upgrade-729395 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.154"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.154 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 14:47:33.693205  171817 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.154
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "stopped-upgrade-729395"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.154"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.154"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 14:47:33.693302  171817 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1213 14:47:33.703199  171817 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 14:47:33.703284  171817 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 14:47:33.712507  171817 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1213 14:47:33.731659  171817 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 14:47:33.749090  171817 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1213 14:47:33.767611  171817 ssh_runner.go:195] Run: grep 192.168.39.154	control-plane.minikube.internal$ /etc/hosts
	I1213 14:47:33.771622  171817 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.154	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 14:47:33.785117  171817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 14:47:33.914543  171817 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 14:47:33.931768  171817 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/stopped-upgrade-729395 for IP: 192.168.39.154
	I1213 14:47:33.931806  171817 certs.go:195] generating shared ca certs ...
	I1213 14:47:33.931826  171817 certs.go:227] acquiring lock for ca certs: {Name:mk4d1e73c1a19abecca2e995e14d97b9ab149024 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 14:47:33.932045  171817 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-131207/.minikube/ca.key
	I1213 14:47:33.932135  171817 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-131207/.minikube/proxy-client-ca.key
	I1213 14:47:33.932151  171817 certs.go:257] generating profile certs ...
	I1213 14:47:33.932316  171817 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/stopped-upgrade-729395/client.key
	I1213 14:47:33.932405  171817 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/stopped-upgrade-729395/apiserver.key.bc702708
	I1213 14:47:33.932460  171817 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/stopped-upgrade-729395/proxy-client.key
	I1213 14:47:33.932613  171817 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/135234.pem (1338 bytes)
	W1213 14:47:33.932658  171817 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-131207/.minikube/certs/135234_empty.pem, impossibly tiny 0 bytes
	I1213 14:47:33.932672  171817 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 14:47:33.932712  171817 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem (1078 bytes)
	I1213 14:47:33.932746  171817 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/cert.pem (1123 bytes)
	I1213 14:47:33.932783  171817 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/key.pem (1675 bytes)
	I1213 14:47:33.932842  171817 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/ssl/certs/1352342.pem (1708 bytes)
	I1213 14:47:33.933697  171817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 14:47:33.970590  171817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 14:47:34.006466  171817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 14:47:34.038698  171817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 14:47:34.063273  171817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/stopped-upgrade-729395/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1213 14:47:34.088188  171817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/stopped-upgrade-729395/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 14:47:34.113059  171817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/stopped-upgrade-729395/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 14:47:34.137585  171817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/stopped-upgrade-729395/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 14:47:34.164254  171817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 14:47:34.191343  171817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/certs/135234.pem --> /usr/share/ca-certificates/135234.pem (1338 bytes)
	I1213 14:47:34.217495  171817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/ssl/certs/1352342.pem --> /usr/share/ca-certificates/1352342.pem (1708 bytes)
	I1213 14:47:34.242934  171817 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 14:47:34.265989  171817 ssh_runner.go:195] Run: openssl version
	I1213 14:47:34.272872  171817 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1352342.pem
	I1213 14:47:34.285962  171817 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1352342.pem /etc/ssl/certs/1352342.pem
	I1213 14:47:34.299532  171817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1352342.pem
	I1213 14:47:34.305733  171817 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 14:00 /usr/share/ca-certificates/1352342.pem
	I1213 14:47:34.305800  171817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1352342.pem
	I1213 14:47:34.314432  171817 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 14:47:34.325469  171817 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1352342.pem /etc/ssl/certs/3ec20f2e.0
	I1213 14:47:34.335567  171817 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:47:34.345541  171817 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 14:47:34.355102  171817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:47:34.360042  171817 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 13:06 /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:47:34.360128  171817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:47:34.366853  171817 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 14:47:34.376931  171817 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 14:47:34.386707  171817 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/135234.pem
	I1213 14:47:34.397970  171817 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/135234.pem /etc/ssl/certs/135234.pem
	I1213 14:47:34.408124  171817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/135234.pem
	I1213 14:47:34.413318  171817 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 14:00 /usr/share/ca-certificates/135234.pem
	I1213 14:47:34.413407  171817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/135234.pem
	I1213 14:47:34.419366  171817 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 14:47:34.429014  171817 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/135234.pem /etc/ssl/certs/51391683.0
	I1213 14:47:34.439825  171817 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 14:47:34.445001  171817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 14:47:34.451629  171817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 14:47:34.457785  171817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 14:47:34.464217  171817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 14:47:34.470740  171817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 14:47:34.476837  171817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 14:47:34.484770  171817 kubeadm.go:401] StartCluster: {Name:stopped-upgrade-729395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:stopped-upgrade-729395 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 14:47:34.484851  171817 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 14:47:34.484934  171817 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 14:47:34.529458  171817 cri.go:89] found id: ""
	I1213 14:47:34.529565  171817 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 14:47:34.540523  171817 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 14:47:34.540551  171817 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 14:47:34.540613  171817 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 14:47:34.551987  171817 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 14:47:34.552659  171817 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-729395" does not appear in /home/jenkins/minikube-integration/22122-131207/kubeconfig
	I1213 14:47:34.552950  171817 kubeconfig.go:62] /home/jenkins/minikube-integration/22122-131207/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-729395" cluster setting kubeconfig missing "stopped-upgrade-729395" context setting]
	I1213 14:47:34.553594  171817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-131207/kubeconfig: {Name:mk5ec7ec5b8552878ed34d3387da68b813d7cd4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W1213 14:47:31.683264  171187 pod_ready.go:104] pod "etcd-pause-711635" is not "Ready", error: <nil>
	W1213 14:47:34.180843  171187 pod_ready.go:104] pod "etcd-pause-711635" is not "Ready", error: <nil>
	W1213 14:47:36.181363  171187 pod_ready.go:104] pod "etcd-pause-711635" is not "Ready", error: <nil>
	I1213 14:47:31.478130  171994 main.go:143] libmachine: domain force-systemd-env-936726 has defined MAC address 52:54:00:f5:25:0e in network mk-force-systemd-env-936726
	I1213 14:47:31.478997  171994 main.go:143] libmachine: no network interface addresses found for domain force-systemd-env-936726 (source=lease)
	I1213 14:47:31.479021  171994 main.go:143] libmachine: trying to list again with source=arp
	I1213 14:47:31.479430  171994 main.go:143] libmachine: unable to find current IP address of domain force-systemd-env-936726 in network mk-force-systemd-env-936726 (interfaces detected: [])
	I1213 14:47:31.479469  171994 retry.go:31] will retry after 479.426304ms: waiting for domain to come up
	I1213 14:47:31.960336  171994 main.go:143] libmachine: domain force-systemd-env-936726 has defined MAC address 52:54:00:f5:25:0e in network mk-force-systemd-env-936726
	I1213 14:47:31.960987  171994 main.go:143] libmachine: no network interface addresses found for domain force-systemd-env-936726 (source=lease)
	I1213 14:47:31.961004  171994 main.go:143] libmachine: trying to list again with source=arp
	I1213 14:47:31.961384  171994 main.go:143] libmachine: unable to find current IP address of domain force-systemd-env-936726 in network mk-force-systemd-env-936726 (interfaces detected: [])
	I1213 14:47:31.961428  171994 retry.go:31] will retry after 914.002134ms: waiting for domain to come up
	I1213 14:47:32.877707  171994 main.go:143] libmachine: domain force-systemd-env-936726 has defined MAC address 52:54:00:f5:25:0e in network mk-force-systemd-env-936726
	I1213 14:47:32.878460  171994 main.go:143] libmachine: no network interface addresses found for domain force-systemd-env-936726 (source=lease)
	I1213 14:47:32.878482  171994 main.go:143] libmachine: trying to list again with source=arp
	I1213 14:47:32.878804  171994 main.go:143] libmachine: unable to find current IP address of domain force-systemd-env-936726 in network mk-force-systemd-env-936726 (interfaces detected: [])
	I1213 14:47:32.878853  171994 retry.go:31] will retry after 899.751788ms: waiting for domain to come up
	I1213 14:47:33.780389  171994 main.go:143] libmachine: domain force-systemd-env-936726 has defined MAC address 52:54:00:f5:25:0e in network mk-force-systemd-env-936726
	I1213 14:47:33.781036  171994 main.go:143] libmachine: no network interface addresses found for domain force-systemd-env-936726 (source=lease)
	I1213 14:47:33.781051  171994 main.go:143] libmachine: trying to list again with source=arp
	I1213 14:47:33.781431  171994 main.go:143] libmachine: unable to find current IP address of domain force-systemd-env-936726 in network mk-force-systemd-env-936726 (interfaces detected: [])
	I1213 14:47:33.781473  171994 retry.go:31] will retry after 1.04050293s: waiting for domain to come up
	I1213 14:47:34.823437  171994 main.go:143] libmachine: domain force-systemd-env-936726 has defined MAC address 52:54:00:f5:25:0e in network mk-force-systemd-env-936726
	I1213 14:47:34.824225  171994 main.go:143] libmachine: no network interface addresses found for domain force-systemd-env-936726 (source=lease)
	I1213 14:47:34.824243  171994 main.go:143] libmachine: trying to list again with source=arp
	I1213 14:47:34.824658  171994 main.go:143] libmachine: unable to find current IP address of domain force-systemd-env-936726 in network mk-force-systemd-env-936726 (interfaces detected: [])
	I1213 14:47:34.824708  171994 retry.go:31] will retry after 1.142227745s: waiting for domain to come up
	I1213 14:47:35.968344  171994 main.go:143] libmachine: domain force-systemd-env-936726 has defined MAC address 52:54:00:f5:25:0e in network mk-force-systemd-env-936726
	I1213 14:47:35.969004  171994 main.go:143] libmachine: no network interface addresses found for domain force-systemd-env-936726 (source=lease)
	I1213 14:47:35.969025  171994 main.go:143] libmachine: trying to list again with source=arp
	I1213 14:47:35.969388  171994 main.go:143] libmachine: unable to find current IP address of domain force-systemd-env-936726 in network mk-force-systemd-env-936726 (interfaces detected: [])
	I1213 14:47:35.969434  171994 retry.go:31] will retry after 1.861086546s: waiting for domain to come up
	I1213 14:47:34.234636  171101 api_server.go:269] stopped: https://192.168.72.235:8443/healthz: Get "https://192.168.72.235:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1213 14:47:34.234683  171101 api_server.go:253] Checking apiserver healthz at https://192.168.72.235:8443/healthz ...
	I1213 14:47:34.631767  171817 kapi.go:59] client config for stopped-upgrade-729395: &rest.Config{Host:"https://192.168.39.154:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22122-131207/.minikube/profiles/stopped-upgrade-729395/client.crt", KeyFile:"/home/jenkins/minikube-integration/22122-131207/.minikube/profiles/stopped-upgrade-729395/client.key", CAFile:"/home/jenkins/minikube-integration/22122-131207/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 14:47:34.632441  171817 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 14:47:34.632463  171817 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 14:47:34.632470  171817 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 14:47:34.632477  171817 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 14:47:34.632483  171817 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 14:47:34.632963  171817 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 14:47:34.648270  171817 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -41,9 +41,6 @@
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      - name: "proxy-refresh-interval"
	-        value: "70000"
	 kubernetesVersion: v1.32.0
	 networking:
	   dnsDomain: cluster.local
	
	-- /stdout --
	I1213 14:47:34.648294  171817 kubeadm.go:1161] stopping kube-system containers ...
	I1213 14:47:34.648311  171817 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1213 14:47:34.648368  171817 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 14:47:34.691535  171817 cri.go:89] found id: ""
	I1213 14:47:34.691620  171817 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 14:47:34.712801  171817 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 14:47:34.722813  171817 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 14:47:34.722843  171817 kubeadm.go:158] found existing configuration files:
	
	I1213 14:47:34.722911  171817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 14:47:34.733046  171817 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 14:47:34.733137  171817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 14:47:34.742408  171817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 14:47:34.751561  171817 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 14:47:34.751631  171817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 14:47:34.761410  171817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 14:47:34.770729  171817 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 14:47:34.770792  171817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 14:47:34.780533  171817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 14:47:34.790822  171817 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 14:47:34.790900  171817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 14:47:34.802492  171817 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 14:47:34.812645  171817 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:47:34.873896  171817 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:47:36.147624  171817 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.273680509s)
	I1213 14:47:36.147716  171817 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:47:36.363812  171817 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:47:36.438759  171817 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:47:36.535297  171817 api_server.go:52] waiting for apiserver process to appear ...
	I1213 14:47:36.535390  171817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:47:37.036403  171817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:47:37.536248  171817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:47:37.552569  171817 api_server.go:72] duration metric: took 1.017282565s to wait for apiserver process to appear ...
	I1213 14:47:37.552609  171817 api_server.go:88] waiting for apiserver healthz status ...
	I1213 14:47:37.552640  171817 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8443/healthz ...
	W1213 14:47:38.680907  171187 pod_ready.go:104] pod "etcd-pause-711635" is not "Ready", error: <nil>
	W1213 14:47:41.180477  171187 pod_ready.go:104] pod "etcd-pause-711635" is not "Ready", error: <nil>
	I1213 14:47:37.832034  171994 main.go:143] libmachine: domain force-systemd-env-936726 has defined MAC address 52:54:00:f5:25:0e in network mk-force-systemd-env-936726
	I1213 14:47:37.832730  171994 main.go:143] libmachine: no network interface addresses found for domain force-systemd-env-936726 (source=lease)
	I1213 14:47:37.832749  171994 main.go:143] libmachine: trying to list again with source=arp
	I1213 14:47:37.833160  171994 main.go:143] libmachine: unable to find current IP address of domain force-systemd-env-936726 in network mk-force-systemd-env-936726 (interfaces detected: [])
	I1213 14:47:37.833202  171994 retry.go:31] will retry after 2.789342071s: waiting for domain to come up
	I1213 14:47:40.625594  171994 main.go:143] libmachine: domain force-systemd-env-936726 has defined MAC address 52:54:00:f5:25:0e in network mk-force-systemd-env-936726
	I1213 14:47:40.626450  171994 main.go:143] libmachine: no network interface addresses found for domain force-systemd-env-936726 (source=lease)
	I1213 14:47:40.626471  171994 main.go:143] libmachine: trying to list again with source=arp
	I1213 14:47:40.626894  171994 main.go:143] libmachine: unable to find current IP address of domain force-systemd-env-936726 in network mk-force-systemd-env-936726 (interfaces detected: [])
	I1213 14:47:40.626939  171994 retry.go:31] will retry after 2.567412233s: waiting for domain to come up
	I1213 14:47:40.486883  171817 api_server.go:279] https://192.168.39.154:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 14:47:40.486925  171817 api_server.go:103] status: https://192.168.39.154:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 14:47:40.486947  171817 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8443/healthz ...
	I1213 14:47:40.570023  171817 api_server.go:279] https://192.168.39.154:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 14:47:40.570060  171817 api_server.go:103] status: https://192.168.39.154:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 14:47:40.570097  171817 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8443/healthz ...
	I1213 14:47:40.584749  171817 api_server.go:279] https://192.168.39.154:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 14:47:40.584799  171817 api_server.go:103] status: https://192.168.39.154:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 14:47:41.053502  171817 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8443/healthz ...
	I1213 14:47:41.057960  171817 api_server.go:279] https://192.168.39.154:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 14:47:41.057988  171817 api_server.go:103] status: https://192.168.39.154:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 14:47:41.553260  171817 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8443/healthz ...
	I1213 14:47:41.559790  171817 api_server.go:279] https://192.168.39.154:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 14:47:41.559822  171817 api_server.go:103] status: https://192.168.39.154:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 14:47:42.053563  171817 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8443/healthz ...
	I1213 14:47:42.058580  171817 api_server.go:279] https://192.168.39.154:8443/healthz returned 200:
	ok
	I1213 14:47:42.065810  171817 api_server.go:141] control plane version: v1.32.0
	I1213 14:47:42.065852  171817 api_server.go:131] duration metric: took 4.513234152s to wait for apiserver health ...
	I1213 14:47:42.065866  171817 cni.go:84] Creating CNI manager for ""
	I1213 14:47:42.065875  171817 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 14:47:42.067630  171817 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 14:47:42.068643  171817 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 14:47:42.079251  171817 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1213 14:47:42.097635  171817 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 14:47:42.101888  171817 system_pods.go:59] 5 kube-system pods found
	I1213 14:47:42.101929  171817 system_pods.go:61] "etcd-stopped-upgrade-729395" [3d0f5358-11ab-473e-828c-52505111c2bf] Pending
	I1213 14:47:42.101941  171817 system_pods.go:61] "kube-apiserver-stopped-upgrade-729395" [9ca106a5-161c-4eca-86ce-2082cb887e2c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 14:47:42.101951  171817 system_pods.go:61] "kube-controller-manager-stopped-upgrade-729395" [8ff945e7-34e1-4ffe-997e-709e3aa0e127] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 14:47:42.101968  171817 system_pods.go:61] "kube-scheduler-stopped-upgrade-729395" [5466ba27-4966-4ae1-8a0e-d29e4d90b269] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 14:47:42.101977  171817 system_pods.go:61] "storage-provisioner" [0a75f7ac-c601-41a1-9f7f-fdaa13e20289] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1213 14:47:42.101987  171817 system_pods.go:74] duration metric: took 4.328596ms to wait for pod list to return data ...
	I1213 14:47:42.101999  171817 node_conditions.go:102] verifying NodePressure condition ...
	I1213 14:47:42.104752  171817 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 14:47:42.104778  171817 node_conditions.go:123] node cpu capacity is 2
	I1213 14:47:42.104794  171817 node_conditions.go:105] duration metric: took 2.789354ms to run NodePressure ...
	I1213 14:47:42.104859  171817 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:47:42.361857  171817 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 14:47:42.373463  171817 ops.go:34] apiserver oom_adj: -16
	I1213 14:47:42.373497  171817 kubeadm.go:602] duration metric: took 7.832937672s to restartPrimaryControlPlane
	I1213 14:47:42.373514  171817 kubeadm.go:403] duration metric: took 7.888751347s to StartCluster
	I1213 14:47:42.373545  171817 settings.go:142] acquiring lock: {Name:mk721202c5d0c56fb9fb8fa9c13a73c8448f716f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 14:47:42.373652  171817 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22122-131207/kubeconfig
	I1213 14:47:42.374800  171817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-131207/kubeconfig: {Name:mk5ec7ec5b8552878ed34d3387da68b813d7cd4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 14:47:42.375133  171817 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.154 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 14:47:42.375229  171817 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 14:47:42.375336  171817 addons.go:70] Setting storage-provisioner=true in profile "stopped-upgrade-729395"
	I1213 14:47:42.375367  171817 addons.go:239] Setting addon storage-provisioner=true in "stopped-upgrade-729395"
	W1213 14:47:42.375380  171817 addons.go:248] addon storage-provisioner should already be in state true
	I1213 14:47:42.375384  171817 config.go:182] Loaded profile config "stopped-upgrade-729395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1213 14:47:42.375406  171817 host.go:66] Checking if "stopped-upgrade-729395" exists ...
	I1213 14:47:42.375523  171817 addons.go:70] Setting default-storageclass=true in profile "stopped-upgrade-729395"
	I1213 14:47:42.375556  171817 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-729395"
	I1213 14:47:42.377140  171817 out.go:179] * Verifying Kubernetes components...
	I1213 14:47:42.377143  171817 out.go:179] * Creating mount /home/jenkins:/minikube-host ...
	I1213 14:47:42.378163  171817 kapi.go:59] client config for stopped-upgrade-729395: &rest.Config{Host:"https://192.168.39.154:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22122-131207/.minikube/profiles/stopped-upgrade-729395/client.crt", KeyFile:"/home/jenkins/minikube-integration/22122-131207/.minikube/profiles/stopped-upgrade-729395/client.key", CAFile:"/home/jenkins/minikube-integration/22122-131207/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 14:47:42.378431  171817 addons.go:239] Setting addon default-storageclass=true in "stopped-upgrade-729395"
	W1213 14:47:42.378447  171817 addons.go:248] addon default-storageclass should already be in state true
	I1213 14:47:42.378474  171817 host.go:66] Checking if "stopped-upgrade-729395" exists ...
	I1213 14:47:42.379190  171817 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 14:47:42.379229  171817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 14:47:42.379616  171817 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/stopped-upgrade-729395/.mount-process: {Name:mke15d6e1465b1121607bf237533c07207c1695d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 14:47:42.380153  171817 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 14:47:42.380180  171817 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 14:47:42.380315  171817 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:47:42.380333  171817 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 14:47:42.383411  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:42.383672  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:42.383846  171817 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:4b:4e", ip: ""} in network mk-stopped-upgrade-729395: {Iface:virbr1 ExpiryTime:2025-12-13 15:47:24 +0000 UTC Type:0 Mac:52:54:00:fb:4b:4e Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:stopped-upgrade-729395 Clientid:01:52:54:00:fb:4b:4e}
	I1213 14:47:42.383874  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined IP address 192.168.39.154 and MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:42.384021  171817 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/stopped-upgrade-729395/id_rsa Username:docker}
	I1213 14:47:42.384207  171817 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:4b:4e", ip: ""} in network mk-stopped-upgrade-729395: {Iface:virbr1 ExpiryTime:2025-12-13 15:47:24 +0000 UTC Type:0 Mac:52:54:00:fb:4b:4e Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:stopped-upgrade-729395 Clientid:01:52:54:00:fb:4b:4e}
	I1213 14:47:42.384232  171817 main.go:143] libmachine: domain stopped-upgrade-729395 has defined IP address 192.168.39.154 and MAC address 52:54:00:fb:4b:4e in network mk-stopped-upgrade-729395
	I1213 14:47:42.384415  171817 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/stopped-upgrade-729395/id_rsa Username:docker}
	I1213 14:47:42.589755  171817 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 14:47:42.612726  171817 api_server.go:52] waiting for apiserver process to appear ...
	I1213 14:47:42.612808  171817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:47:42.652329  171817 api_server.go:72] duration metric: took 277.137224ms to wait for apiserver process to appear ...
	I1213 14:47:42.652361  171817 api_server.go:88] waiting for apiserver healthz status ...
	I1213 14:47:42.652380  171817 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8443/healthz ...
	I1213 14:47:42.661303  171817 api_server.go:279] https://192.168.39.154:8443/healthz returned 200:
	ok
	I1213 14:47:42.662386  171817 api_server.go:141] control plane version: v1.32.0
	I1213 14:47:42.662426  171817 api_server.go:131] duration metric: took 10.057663ms to wait for apiserver health ...
	I1213 14:47:42.662440  171817 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 14:47:42.670979  171817 system_pods.go:59] 5 kube-system pods found
	I1213 14:47:42.671019  171817 system_pods.go:61] "etcd-stopped-upgrade-729395" [3d0f5358-11ab-473e-828c-52505111c2bf] Pending
	I1213 14:47:42.671032  171817 system_pods.go:61] "kube-apiserver-stopped-upgrade-729395" [9ca106a5-161c-4eca-86ce-2082cb887e2c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 14:47:42.671040  171817 system_pods.go:61] "kube-controller-manager-stopped-upgrade-729395" [8ff945e7-34e1-4ffe-997e-709e3aa0e127] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 14:47:42.671051  171817 system_pods.go:61] "kube-scheduler-stopped-upgrade-729395" [5466ba27-4966-4ae1-8a0e-d29e4d90b269] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 14:47:42.671059  171817 system_pods.go:61] "storage-provisioner" [0a75f7ac-c601-41a1-9f7f-fdaa13e20289] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1213 14:47:42.671068  171817 system_pods.go:74] duration metric: took 8.619669ms to wait for pod list to return data ...
	I1213 14:47:42.671108  171817 kubeadm.go:587] duration metric: took 295.926922ms to wait for: map[apiserver:true system_pods:true]
	I1213 14:47:42.671132  171817 node_conditions.go:102] verifying NodePressure condition ...
	I1213 14:47:42.675026  171817 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 14:47:42.675061  171817 node_conditions.go:123] node cpu capacity is 2
	I1213 14:47:42.675098  171817 node_conditions.go:105] duration metric: took 3.959356ms to run NodePressure ...
	I1213 14:47:42.675125  171817 start.go:242] waiting for startup goroutines ...
	I1213 14:47:42.680027  171817 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:47:42.737760  171817 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:47:43.440325  171817 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1213 14:47:43.441318  171817 addons.go:530] duration metric: took 1.066098822s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1213 14:47:43.441370  171817 start.go:247] waiting for cluster config update ...
	I1213 14:47:43.441389  171817 start.go:256] writing updated cluster config ...
	I1213 14:47:43.441658  171817 ssh_runner.go:195] Run: rm -f paused
	I1213 14:47:43.505713  171817 start.go:625] kubectl: 1.34.3, cluster: 1.32.0 (minor skew: 2)
	I1213 14:47:43.507530  171817 out.go:203] 
	W1213 14:47:43.508795  171817 out.go:285] ! /usr/local/bin/kubectl is version 1.34.3, which may have incompatibilities with Kubernetes 1.32.0.
	I1213 14:47:43.510120  171817 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I1213 14:47:43.511726  171817 out.go:179] * Done! kubectl is now configured to use "stopped-upgrade-729395" cluster and "default" namespace by default
	I1213 14:47:39.235109  171101 api_server.go:269] stopped: https://192.168.72.235:8443/healthz: Get "https://192.168.72.235:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1213 14:47:39.235173  171101 api_server.go:253] Checking apiserver healthz at https://192.168.72.235:8443/healthz ...
	I1213 14:47:39.474802  171101 api_server.go:269] stopped: https://192.168.72.235:8443/healthz: Get "https://192.168.72.235:8443/healthz": read tcp 192.168.72.1:36690->192.168.72.235:8443: read: connection reset by peer
	I1213 14:47:39.733234  171101 api_server.go:253] Checking apiserver healthz at https://192.168.72.235:8443/healthz ...
	I1213 14:47:39.733889  171101 api_server.go:269] stopped: https://192.168.72.235:8443/healthz: Get "https://192.168.72.235:8443/healthz": dial tcp 192.168.72.235:8443: connect: connection refused
	I1213 14:47:40.233265  171101 api_server.go:253] Checking apiserver healthz at https://192.168.72.235:8443/healthz ...
	I1213 14:47:40.233990  171101 api_server.go:269] stopped: https://192.168.72.235:8443/healthz: Get "https://192.168.72.235:8443/healthz": dial tcp 192.168.72.235:8443: connect: connection refused
	I1213 14:47:40.732712  171101 api_server.go:253] Checking apiserver healthz at https://192.168.72.235:8443/healthz ...
	I1213 14:47:40.733476  171101 api_server.go:269] stopped: https://192.168.72.235:8443/healthz: Get "https://192.168.72.235:8443/healthz": dial tcp 192.168.72.235:8443: connect: connection refused
	I1213 14:47:41.233142  171101 api_server.go:253] Checking apiserver healthz at https://192.168.72.235:8443/healthz ...
	I1213 14:47:41.233768  171101 api_server.go:269] stopped: https://192.168.72.235:8443/healthz: Get "https://192.168.72.235:8443/healthz": dial tcp 192.168.72.235:8443: connect: connection refused
	I1213 14:47:41.733301  171101 api_server.go:253] Checking apiserver healthz at https://192.168.72.235:8443/healthz ...
	I1213 14:47:41.734036  171101 api_server.go:269] stopped: https://192.168.72.235:8443/healthz: Get "https://192.168.72.235:8443/healthz": dial tcp 192.168.72.235:8443: connect: connection refused
	I1213 14:47:42.232720  171101 api_server.go:253] Checking apiserver healthz at https://192.168.72.235:8443/healthz ...
	I1213 14:47:42.233503  171101 api_server.go:269] stopped: https://192.168.72.235:8443/healthz: Get "https://192.168.72.235:8443/healthz": dial tcp 192.168.72.235:8443: connect: connection refused
	I1213 14:47:42.733292  171101 api_server.go:253] Checking apiserver healthz at https://192.168.72.235:8443/healthz ...
	I1213 14:47:42.734067  171101 api_server.go:269] stopped: https://192.168.72.235:8443/healthz: Get "https://192.168.72.235:8443/healthz": dial tcp 192.168.72.235:8443: connect: connection refused
	I1213 14:47:43.233466  171101 api_server.go:253] Checking apiserver healthz at https://192.168.72.235:8443/healthz ...
	I1213 14:47:43.234088  171101 api_server.go:269] stopped: https://192.168.72.235:8443/healthz: Get "https://192.168.72.235:8443/healthz": dial tcp 192.168.72.235:8443: connect: connection refused
	W1213 14:47:43.180999  171187 pod_ready.go:104] pod "etcd-pause-711635" is not "Ready", error: <nil>
	I1213 14:47:43.680946  171187 pod_ready.go:94] pod "etcd-pause-711635" is "Ready"
	I1213 14:47:43.680974  171187 pod_ready.go:86] duration metric: took 14.006638109s for pod "etcd-pause-711635" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 14:47:43.683383  171187 pod_ready.go:83] waiting for pod "kube-apiserver-pause-711635" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 14:47:43.687712  171187 pod_ready.go:94] pod "kube-apiserver-pause-711635" is "Ready"
	I1213 14:47:43.687736  171187 pod_ready.go:86] duration metric: took 4.3285ms for pod "kube-apiserver-pause-711635" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 14:47:43.690452  171187 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-711635" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 14:47:43.695396  171187 pod_ready.go:94] pod "kube-controller-manager-pause-711635" is "Ready"
	I1213 14:47:43.695420  171187 pod_ready.go:86] duration metric: took 4.945881ms for pod "kube-controller-manager-pause-711635" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 14:47:43.698432  171187 pod_ready.go:83] waiting for pod "kube-proxy-ck5nd" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 14:47:43.877730  171187 pod_ready.go:94] pod "kube-proxy-ck5nd" is "Ready"
	I1213 14:47:43.877767  171187 pod_ready.go:86] duration metric: took 179.313783ms for pod "kube-proxy-ck5nd" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 14:47:44.079145  171187 pod_ready.go:83] waiting for pod "kube-scheduler-pause-711635" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 14:47:44.478352  171187 pod_ready.go:94] pod "kube-scheduler-pause-711635" is "Ready"
	I1213 14:47:44.478390  171187 pod_ready.go:86] duration metric: took 399.210886ms for pod "kube-scheduler-pause-711635" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 14:47:44.478407  171187 pod_ready.go:40] duration metric: took 14.81740749s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 14:47:44.526649  171187 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1213 14:47:44.528277  171187 out.go:179] * Done! kubectl is now configured to use "pause-711635" cluster and "default" namespace by default
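The pod_ready.go lines above wait, per label selector, for each control-plane pod in kube-system to report the Ready condition and then record a duration metric. A minimal sketch of the same pattern using client-go (kubeconfig path and selectors are assumptions, and this is not minikube's implementation; real code would also bound each wait with a deadline):

// pod_ready_sketch.go - illustrative sketch of the "wait for pod Ready" pattern.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the PodReady condition is True.
func isPodReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	start := time.Now()
	for _, selector := range []string{"component=etcd", "component=kube-apiserver", "k8s-app=kube-proxy"} {
		for { // a production version would add a deadline here
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 && isPodReady(&pods.Items[0]) {
				break
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
	fmt.Printf("duration metric: took %s waiting for control-plane pods to be Ready\n", time.Since(start))
}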
	I1213 14:47:43.195750  171994 main.go:143] libmachine: domain force-systemd-env-936726 has defined MAC address 52:54:00:f5:25:0e in network mk-force-systemd-env-936726
	I1213 14:47:43.196420  171994 main.go:143] libmachine: no network interface addresses found for domain force-systemd-env-936726 (source=lease)
	I1213 14:47:43.196440  171994 main.go:143] libmachine: trying to list again with source=arp
	I1213 14:47:43.196882  171994 main.go:143] libmachine: unable to find current IP address of domain force-systemd-env-936726 in network mk-force-systemd-env-936726 (interfaces detected: [])
	I1213 14:47:43.196922  171994 retry.go:31] will retry after 2.92668831s: waiting for domain to come up
	I1213 14:47:46.126473  171994 main.go:143] libmachine: domain force-systemd-env-936726 has defined MAC address 52:54:00:f5:25:0e in network mk-force-systemd-env-936726
	I1213 14:47:46.127342  171994 main.go:143] libmachine: domain force-systemd-env-936726 has current primary IP address 192.168.61.238 and MAC address 52:54:00:f5:25:0e in network mk-force-systemd-env-936726
	I1213 14:47:46.127364  171994 main.go:143] libmachine: found domain IP: 192.168.61.238
	I1213 14:47:46.127384  171994 main.go:143] libmachine: reserving static IP address...
	I1213 14:47:46.127974  171994 main.go:143] libmachine: unable to find host DHCP lease matching {name: "force-systemd-env-936726", mac: "52:54:00:f5:25:0e", ip: "192.168.61.238"} in network mk-force-systemd-env-936726
	I1213 14:47:46.370749  171994 main.go:143] libmachine: reserved static IP address 192.168.61.238 for domain force-systemd-env-936726
	I1213 14:47:46.370774  171994 main.go:143] libmachine: waiting for SSH...
	I1213 14:47:46.370781  171994 main.go:143] libmachine: Getting to WaitForSSH function...
	I1213 14:47:46.374847  171994 main.go:143] libmachine: domain force-systemd-env-936726 has defined MAC address 52:54:00:f5:25:0e in network mk-force-systemd-env-936726
	I1213 14:47:46.375459  171994 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f5:25:0e", ip: ""} in network mk-force-systemd-env-936726: {Iface:virbr3 ExpiryTime:2025-12-13 15:47:43 +0000 UTC Type:0 Mac:52:54:00:f5:25:0e Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f5:25:0e}
	I1213 14:47:46.375499  171994 main.go:143] libmachine: domain force-systemd-env-936726 has defined IP address 192.168.61.238 and MAC address 52:54:00:f5:25:0e in network mk-force-systemd-env-936726
	I1213 14:47:46.375712  171994 main.go:143] libmachine: Using SSH client type: native
	I1213 14:47:46.376060  171994 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.61.238 22 <nil> <nil>}
	I1213 14:47:46.376091  171994 main.go:143] libmachine: About to run SSH command:
	exit 0
	
	
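The CRI-O section below is dominated by repeated Version, ImageFsInfo and ListContainers requests arriving over the CRI gRPC API, which the kubelet and the log collector poll. A minimal sketch of issuing the same three calls against the CRI-O socket with k8s.io/cri-api, assuming the default /var/run/crio/crio.sock path (an approximation of what kubelet/crictl do, not their code):

// cri_list_sketch.go - illustrative sketch; socket path and timeout are assumptions.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O listens on a unix socket; this is its usual default location.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	img := runtimeapi.NewImageServiceClient(conn)

	if v, err := rt.Version(ctx, &runtimeapi.VersionRequest{}); err == nil {
		fmt.Printf("runtime: %s %s (CRI %s)\n", v.RuntimeName, v.RuntimeVersion, v.RuntimeApiVersion)
	}
	if fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{}); err == nil {
		for _, f := range fs.ImageFilesystems {
			fmt.Printf("image fs %s: %d bytes used\n", f.FsId.Mountpoint, f.UsedBytes.Value)
		}
	}
	// An empty filter returns the full container list, matching the
	// "No filters were applied" debug lines below.
	if lc, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{Filter: &runtimeapi.ContainerFilter{}}); err == nil {
		for _, c := range lc.Containers {
			fmt.Printf("%.12s  %-24s attempt=%d  %s\n", c.Id, c.Metadata.Name, c.Metadata.Attempt, c.State)
		}
	}
}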
	==> CRI-O <==
	Dec 13 14:47:47 pause-711635 crio[2810]: time="2025-12-13 14:47:47.179249624Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765637267179190369,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=14b960f6-537c-44a0-8c3d-f6ded9ae220a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 14:47:47 pause-711635 crio[2810]: time="2025-12-13 14:47:47.180693081Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=80a6a50d-1e20-4898-b4a0-fb7ca5948f8d name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 14:47:47 pause-711635 crio[2810]: time="2025-12-13 14:47:47.180752688Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=80a6a50d-1e20-4898-b4a0-fb7ca5948f8d name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 14:47:47 pause-711635 crio[2810]: time="2025-12-13 14:47:47.181052471Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:203150d60068702447319418b794e907cb9eb775fe0083139f0124f2ec26cd6f,PodSandboxId:3b5d4eb3c145b559192cd59a84f6bda2b048da14bfedc0fe4be65a6a26579d92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765637245567414734,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-711635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27a58fc8e33a4fb63ba834003db9bd35,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\
":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6b973ab72f12b27712b779a75a8502c3905842376c161796ccceed4a4c168fb,PodSandboxId:33cd26a16288a618116a7c49e0aff2a8806ce94dfcb45de8c042148cb29687ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765637245558298071,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-711635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55a12ff0f7e055227f042f32afd9f161,},Annotations:map[string]string{io.kubernetes.container.hash: 5a69
92ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a51d032e9184587d0f79265e51ed8e11030758e36d03690b1e5acdec2ad5884,PodSandboxId:3b574cf8180a8564c27fa7452ac09f44af880d1c0afbb521276a0d768615c21a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765637243037599343,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-711635,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 244b742b6b195b07fb3985b5cdfab02d,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dac16650b8d2aca7e084adc8d169cd7d123d4602922b5efc45d93f2003ddca10,PodSandboxId:0534857599f95c3f4fcc0dbba239aa383f17fbcfc46f08a67d9a7eb813af7cfa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765637243016544954,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-711635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbac580b1f00a7d646efcfa6ed45147d,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeb71d3b8d0041df6dc39d4f7c0565870f1b9f6a5f7c7369c5d95768bfe0f354,PodSandboxId:465940a2d8ab58514a69478a48f0702a3a4f1d91027a99c770dec9753e516512,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:
CONTAINER_RUNNING,CreatedAt:1765637241008807595,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-rtkhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ba241f5-6e50-474a-a043-1120ec1bbfa2,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91cb0910783bea73269ae7ba3630a37c3379547d9c3f65a962c968cea9f14cd7,PodSandboxId:e3e6910a474ac
822de1489e76885e79fd170fccdd6a226440f2ebbe8a60c54d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765637240008213511,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ck5nd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b82a9f3d-e529-4e43-bb38-6b5d2be9e874,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63da8cf30c0728b367192b23c0ad09a95691fefcbe11ed6dfe4005d4552edcd4,PodSandboxId:465940a2d8ab58514a69478a48f0702a3a4f1d91027a99
c770dec9753e516512,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765637222103704339,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-rtkhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ba241f5-6e50-474a-a043-1120ec1bbfa2,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e2b7455204e91c5826b2ce34e52f4243b69ac1bbcaff1a3df0ec71d0dd1af34,PodSandboxId:e3e6910a474ac822de1489e76885e79fd170fccdd6a226440f2ebbe8a60c54d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765637221615216453,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ck5nd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b82a9f3d-e529-4e43-bb38-6b5d2be9e874,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0485cc8093555115bf029d30a24bed9c1eb5e14ad63d4669822c93895d864884,PodSandboxId:0534857599f95c3f4fcc0dbba239aa383f17fbcfc46f08a67d9a7eb813af7cfa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765637221664728547,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-711635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbac580b1f00a7d646efcfa6ed45147d,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55df0f6b2939ce926865c3d822f25235c14de29a8ea6a60cd697e55cdf027d9d,PodSandboxId:3b574cf8180a8564c27fa7452ac09f44af880d1c0afbb521276a0d768615c21a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765637221611886637,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-711635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 244b742b6b195b07fb3985b5cdfab02d,},
Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d368c6c2e20483aa5b61f03e53d3ad11acd476917ff5089ec8350ddabf87027,PodSandboxId:33cd26a16288a618116a7c49e0aff2a8806ce94dfcb45de8c042148cb29687ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765637221549227863,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-711635,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 55a12ff0f7e055227f042f32afd9f161,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6bdf39155d38fe6ad628304312983b81c0e4d68980f0da049cbf021a87a05b9,PodSandboxId:3b5d4eb3c145b559192cd59a84f6bda2b048da14bfedc0fe4be65a6a26579d92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765637221515039526,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-711635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27a58fc8e33a4fb63ba834003db9bd35,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=80a6a50d-1e20-4898-b4a0-fb7ca5948f8d name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 14:47:47 pause-711635 crio[2810]: time="2025-12-13 14:47:47.224265667Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=299e087c-061e-4f52-a970-38d0d7e6022e name=/runtime.v1.RuntimeService/Version
	Dec 13 14:47:47 pause-711635 crio[2810]: time="2025-12-13 14:47:47.224427255Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=299e087c-061e-4f52-a970-38d0d7e6022e name=/runtime.v1.RuntimeService/Version
	Dec 13 14:47:47 pause-711635 crio[2810]: time="2025-12-13 14:47:47.226508181Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8df0b3b1-55b9-4bc3-9cf9-c3b5e22785dc name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 14:47:47 pause-711635 crio[2810]: time="2025-12-13 14:47:47.226840009Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765637267226819539,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8df0b3b1-55b9-4bc3-9cf9-c3b5e22785dc name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 14:47:47 pause-711635 crio[2810]: time="2025-12-13 14:47:47.227658131Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dbcf9ed6-dacb-4320-9c68-152afe6e7437 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 14:47:47 pause-711635 crio[2810]: time="2025-12-13 14:47:47.227730947Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dbcf9ed6-dacb-4320-9c68-152afe6e7437 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 14:47:47 pause-711635 crio[2810]: time="2025-12-13 14:47:47.228079564Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:203150d60068702447319418b794e907cb9eb775fe0083139f0124f2ec26cd6f,PodSandboxId:3b5d4eb3c145b559192cd59a84f6bda2b048da14bfedc0fe4be65a6a26579d92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765637245567414734,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-711635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27a58fc8e33a4fb63ba834003db9bd35,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\
":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6b973ab72f12b27712b779a75a8502c3905842376c161796ccceed4a4c168fb,PodSandboxId:33cd26a16288a618116a7c49e0aff2a8806ce94dfcb45de8c042148cb29687ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765637245558298071,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-711635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55a12ff0f7e055227f042f32afd9f161,},Annotations:map[string]string{io.kubernetes.container.hash: 5a69
92ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a51d032e9184587d0f79265e51ed8e11030758e36d03690b1e5acdec2ad5884,PodSandboxId:3b574cf8180a8564c27fa7452ac09f44af880d1c0afbb521276a0d768615c21a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765637243037599343,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-711635,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 244b742b6b195b07fb3985b5cdfab02d,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dac16650b8d2aca7e084adc8d169cd7d123d4602922b5efc45d93f2003ddca10,PodSandboxId:0534857599f95c3f4fcc0dbba239aa383f17fbcfc46f08a67d9a7eb813af7cfa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765637243016544954,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-711635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbac580b1f00a7d646efcfa6ed45147d,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeb71d3b8d0041df6dc39d4f7c0565870f1b9f6a5f7c7369c5d95768bfe0f354,PodSandboxId:465940a2d8ab58514a69478a48f0702a3a4f1d91027a99c770dec9753e516512,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:
CONTAINER_RUNNING,CreatedAt:1765637241008807595,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-rtkhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ba241f5-6e50-474a-a043-1120ec1bbfa2,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91cb0910783bea73269ae7ba3630a37c3379547d9c3f65a962c968cea9f14cd7,PodSandboxId:e3e6910a474ac
822de1489e76885e79fd170fccdd6a226440f2ebbe8a60c54d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765637240008213511,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ck5nd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b82a9f3d-e529-4e43-bb38-6b5d2be9e874,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63da8cf30c0728b367192b23c0ad09a95691fefcbe11ed6dfe4005d4552edcd4,PodSandboxId:465940a2d8ab58514a69478a48f0702a3a4f1d91027a99
c770dec9753e516512,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765637222103704339,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-rtkhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ba241f5-6e50-474a-a043-1120ec1bbfa2,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e2b7455204e91c5826b2ce34e52f4243b69ac1bbcaff1a3df0ec71d0dd1af34,PodSandboxId:e3e6910a474ac822de1489e76885e79fd170fccdd6a226440f2ebbe8a60c54d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765637221615216453,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ck5nd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b82a9f3d-e529-4e43-bb38-6b5d2be9e874,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0485cc8093555115bf029d30a24bed9c1eb5e14ad63d4669822c93895d864884,PodSandboxId:0534857599f95c3f4fcc0dbba239aa383f17fbcfc46f08a67d9a7eb813af7cfa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765637221664728547,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-711635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbac580b1f00a7d646efcfa6ed45147d,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55df0f6b2939ce926865c3d822f25235c14de29a8ea6a60cd697e55cdf027d9d,PodSandboxId:3b574cf8180a8564c27fa7452ac09f44af880d1c0afbb521276a0d768615c21a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765637221611886637,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-711635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 244b742b6b195b07fb3985b5cdfab02d,},
Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d368c6c2e20483aa5b61f03e53d3ad11acd476917ff5089ec8350ddabf87027,PodSandboxId:33cd26a16288a618116a7c49e0aff2a8806ce94dfcb45de8c042148cb29687ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765637221549227863,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-711635,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 55a12ff0f7e055227f042f32afd9f161,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6bdf39155d38fe6ad628304312983b81c0e4d68980f0da049cbf021a87a05b9,PodSandboxId:3b5d4eb3c145b559192cd59a84f6bda2b048da14bfedc0fe4be65a6a26579d92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765637221515039526,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-711635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27a58fc8e33a4fb63ba834003db9bd35,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dbcf9ed6-dacb-4320-9c68-152afe6e7437 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 14:47:47 pause-711635 crio[2810]: time="2025-12-13 14:47:47.273396577Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c6d6ec5c-8041-4373-8a0c-cbd404c34e9e name=/runtime.v1.RuntimeService/Version
	Dec 13 14:47:47 pause-711635 crio[2810]: time="2025-12-13 14:47:47.273523221Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c6d6ec5c-8041-4373-8a0c-cbd404c34e9e name=/runtime.v1.RuntimeService/Version
	Dec 13 14:47:47 pause-711635 crio[2810]: time="2025-12-13 14:47:47.275088987Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7b82831f-488e-4da3-aa22-124c9976541d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 14:47:47 pause-711635 crio[2810]: time="2025-12-13 14:47:47.275793344Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765637267275760815,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7b82831f-488e-4da3-aa22-124c9976541d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 14:47:47 pause-711635 crio[2810]: time="2025-12-13 14:47:47.276745236Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eedb4265-74f7-4393-bddb-ccf0de25c439 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 14:47:47 pause-711635 crio[2810]: time="2025-12-13 14:47:47.276877596Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eedb4265-74f7-4393-bddb-ccf0de25c439 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 14:47:47 pause-711635 crio[2810]: time="2025-12-13 14:47:47.277893484Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:203150d60068702447319418b794e907cb9eb775fe0083139f0124f2ec26cd6f,PodSandboxId:3b5d4eb3c145b559192cd59a84f6bda2b048da14bfedc0fe4be65a6a26579d92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765637245567414734,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-711635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27a58fc8e33a4fb63ba834003db9bd35,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\
":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6b973ab72f12b27712b779a75a8502c3905842376c161796ccceed4a4c168fb,PodSandboxId:33cd26a16288a618116a7c49e0aff2a8806ce94dfcb45de8c042148cb29687ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765637245558298071,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-711635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55a12ff0f7e055227f042f32afd9f161,},Annotations:map[string]string{io.kubernetes.container.hash: 5a69
92ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a51d032e9184587d0f79265e51ed8e11030758e36d03690b1e5acdec2ad5884,PodSandboxId:3b574cf8180a8564c27fa7452ac09f44af880d1c0afbb521276a0d768615c21a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765637243037599343,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-711635,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 244b742b6b195b07fb3985b5cdfab02d,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dac16650b8d2aca7e084adc8d169cd7d123d4602922b5efc45d93f2003ddca10,PodSandboxId:0534857599f95c3f4fcc0dbba239aa383f17fbcfc46f08a67d9a7eb813af7cfa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765637243016544954,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-711635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbac580b1f00a7d646efcfa6ed45147d,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeb71d3b8d0041df6dc39d4f7c0565870f1b9f6a5f7c7369c5d95768bfe0f354,PodSandboxId:465940a2d8ab58514a69478a48f0702a3a4f1d91027a99c770dec9753e516512,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:
CONTAINER_RUNNING,CreatedAt:1765637241008807595,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-rtkhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ba241f5-6e50-474a-a043-1120ec1bbfa2,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91cb0910783bea73269ae7ba3630a37c3379547d9c3f65a962c968cea9f14cd7,PodSandboxId:e3e6910a474ac
822de1489e76885e79fd170fccdd6a226440f2ebbe8a60c54d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765637240008213511,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ck5nd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b82a9f3d-e529-4e43-bb38-6b5d2be9e874,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63da8cf30c0728b367192b23c0ad09a95691fefcbe11ed6dfe4005d4552edcd4,PodSandboxId:465940a2d8ab58514a69478a48f0702a3a4f1d91027a99
c770dec9753e516512,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765637222103704339,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-rtkhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ba241f5-6e50-474a-a043-1120ec1bbfa2,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e2b7455204e91c5826b2ce34e52f4243b69ac1bbcaff1a3df0ec71d0dd1af34,PodSandboxId:e3e6910a474ac822de1489e76885e79fd170fccdd6a226440f2ebbe8a60c54d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765637221615216453,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ck5nd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b82a9f3d-e529-4e43-bb38-6b5d2be9e874,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0485cc8093555115bf029d30a24bed9c1eb5e14ad63d4669822c93895d864884,PodSandboxId:0534857599f95c3f4fcc0dbba239aa383f17fbcfc46f08a67d9a7eb813af7cfa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765637221664728547,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-711635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbac580b1f00a7d646efcfa6ed45147d,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55df0f6b2939ce926865c3d822f25235c14de29a8ea6a60cd697e55cdf027d9d,PodSandboxId:3b574cf8180a8564c27fa7452ac09f44af880d1c0afbb521276a0d768615c21a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765637221611886637,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-711635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 244b742b6b195b07fb3985b5cdfab02d,},
Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d368c6c2e20483aa5b61f03e53d3ad11acd476917ff5089ec8350ddabf87027,PodSandboxId:33cd26a16288a618116a7c49e0aff2a8806ce94dfcb45de8c042148cb29687ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765637221549227863,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-711635,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 55a12ff0f7e055227f042f32afd9f161,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6bdf39155d38fe6ad628304312983b81c0e4d68980f0da049cbf021a87a05b9,PodSandboxId:3b5d4eb3c145b559192cd59a84f6bda2b048da14bfedc0fe4be65a6a26579d92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765637221515039526,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-711635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27a58fc8e33a4fb63ba834003db9bd35,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eedb4265-74f7-4393-bddb-ccf0de25c439 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 14:47:47 pause-711635 crio[2810]: time="2025-12-13 14:47:47.321716560Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b848a70d-ed8e-47de-a293-91af226ae238 name=/runtime.v1.RuntimeService/Version
	Dec 13 14:47:47 pause-711635 crio[2810]: time="2025-12-13 14:47:47.321808125Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b848a70d-ed8e-47de-a293-91af226ae238 name=/runtime.v1.RuntimeService/Version
	Dec 13 14:47:47 pause-711635 crio[2810]: time="2025-12-13 14:47:47.323623003Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=59c503ae-eb2a-4eb0-96a5-c67a7eda70ec name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 14:47:47 pause-711635 crio[2810]: time="2025-12-13 14:47:47.323953828Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765637267323931272,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=59c503ae-eb2a-4eb0-96a5-c67a7eda70ec name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 14:47:47 pause-711635 crio[2810]: time="2025-12-13 14:47:47.324669097Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=857ca586-dfce-48fb-846f-480fe0d1917e name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 14:47:47 pause-711635 crio[2810]: time="2025-12-13 14:47:47.324737350Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=857ca586-dfce-48fb-846f-480fe0d1917e name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 14:47:47 pause-711635 crio[2810]: time="2025-12-13 14:47:47.324984528Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:203150d60068702447319418b794e907cb9eb775fe0083139f0124f2ec26cd6f,PodSandboxId:3b5d4eb3c145b559192cd59a84f6bda2b048da14bfedc0fe4be65a6a26579d92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765637245567414734,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-711635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27a58fc8e33a4fb63ba834003db9bd35,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\
":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6b973ab72f12b27712b779a75a8502c3905842376c161796ccceed4a4c168fb,PodSandboxId:33cd26a16288a618116a7c49e0aff2a8806ce94dfcb45de8c042148cb29687ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765637245558298071,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-711635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55a12ff0f7e055227f042f32afd9f161,},Annotations:map[string]string{io.kubernetes.container.hash: 5a69
92ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a51d032e9184587d0f79265e51ed8e11030758e36d03690b1e5acdec2ad5884,PodSandboxId:3b574cf8180a8564c27fa7452ac09f44af880d1c0afbb521276a0d768615c21a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765637243037599343,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-711635,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 244b742b6b195b07fb3985b5cdfab02d,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dac16650b8d2aca7e084adc8d169cd7d123d4602922b5efc45d93f2003ddca10,PodSandboxId:0534857599f95c3f4fcc0dbba239aa383f17fbcfc46f08a67d9a7eb813af7cfa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765637243016544954,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-711635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbac580b1f00a7d646efcfa6ed45147d,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeb71d3b8d0041df6dc39d4f7c0565870f1b9f6a5f7c7369c5d95768bfe0f354,PodSandboxId:465940a2d8ab58514a69478a48f0702a3a4f1d91027a99c770dec9753e516512,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:
CONTAINER_RUNNING,CreatedAt:1765637241008807595,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-rtkhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ba241f5-6e50-474a-a043-1120ec1bbfa2,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91cb0910783bea73269ae7ba3630a37c3379547d9c3f65a962c968cea9f14cd7,PodSandboxId:e3e6910a474ac
822de1489e76885e79fd170fccdd6a226440f2ebbe8a60c54d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765637240008213511,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ck5nd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b82a9f3d-e529-4e43-bb38-6b5d2be9e874,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63da8cf30c0728b367192b23c0ad09a95691fefcbe11ed6dfe4005d4552edcd4,PodSandboxId:465940a2d8ab58514a69478a48f0702a3a4f1d91027a99
c770dec9753e516512,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765637222103704339,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-rtkhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ba241f5-6e50-474a-a043-1120ec1bbfa2,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e2b7455204e91c5826b2ce34e52f4243b69ac1bbcaff1a3df0ec71d0dd1af34,PodSandboxId:e3e6910a474ac822de1489e76885e79fd170fccdd6a226440f2ebbe8a60c54d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765637221615216453,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ck5nd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b82a9f3d-e529-4e43-bb38-6b5d2be9e874,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0485cc8093555115bf029d30a24bed9c1eb5e14ad63d4669822c93895d864884,PodSandboxId:0534857599f95c3f4fcc0dbba239aa383f17fbcfc46f08a67d9a7eb813af7cfa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765637221664728547,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-711635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbac580b1f00a7d646efcfa6ed45147d,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55df0f6b2939ce926865c3d822f25235c14de29a8ea6a60cd697e55cdf027d9d,PodSandboxId:3b574cf8180a8564c27fa7452ac09f44af880d1c0afbb521276a0d768615c21a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765637221611886637,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-711635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 244b742b6b195b07fb3985b5cdfab02d,},
Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d368c6c2e20483aa5b61f03e53d3ad11acd476917ff5089ec8350ddabf87027,PodSandboxId:33cd26a16288a618116a7c49e0aff2a8806ce94dfcb45de8c042148cb29687ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765637221549227863,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-711635,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 55a12ff0f7e055227f042f32afd9f161,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6bdf39155d38fe6ad628304312983b81c0e4d68980f0da049cbf021a87a05b9,PodSandboxId:3b5d4eb3c145b559192cd59a84f6bda2b048da14bfedc0fe4be65a6a26579d92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765637221515039526,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-711635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27a58fc8e33a4fb63ba834003db9bd35,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=857ca586-dfce-48fb-846f-480fe0d1917e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	203150d600687       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   21 seconds ago      Running             kube-apiserver            2                   3b5d4eb3c145b       kube-apiserver-pause-711635            kube-system
	a6b973ab72f12       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   21 seconds ago      Running             etcd                      2                   33cd26a16288a       etcd-pause-711635                      kube-system
	7a51d032e9184       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   24 seconds ago      Running             kube-controller-manager   2                   3b574cf8180a8       kube-controller-manager-pause-711635   kube-system
	dac16650b8d2a       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   24 seconds ago      Running             kube-scheduler            2                   0534857599f95       kube-scheduler-pause-711635            kube-system
	eeb71d3b8d004       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   26 seconds ago      Running             coredns                   2                   465940a2d8ab5       coredns-66bc5c9577-rtkhx               kube-system
	91cb0910783be       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   27 seconds ago      Running             kube-proxy                2                   e3e6910a474ac       kube-proxy-ck5nd                       kube-system
	63da8cf30c072       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   45 seconds ago      Exited              coredns                   1                   465940a2d8ab5       coredns-66bc5c9577-rtkhx               kube-system
	0485cc8093555       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   45 seconds ago      Exited              kube-scheduler            1                   0534857599f95       kube-scheduler-pause-711635            kube-system
	4e2b7455204e9       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   45 seconds ago      Exited              kube-proxy                1                   e3e6910a474ac       kube-proxy-ck5nd                       kube-system
	55df0f6b2939c       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   45 seconds ago      Exited              kube-controller-manager   1                   3b574cf8180a8       kube-controller-manager-pause-711635   kube-system
	7d368c6c2e204       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   45 seconds ago      Exited              etcd                      1                   33cd26a16288a       etcd-pause-711635                      kube-system
	a6bdf39155d38       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   45 seconds ago      Exited              kube-apiserver            1                   3b5d4eb3c145b       kube-apiserver-pause-711635            kube-system
	
	
	==> coredns [63da8cf30c0728b367192b23c0ad09a95691fefcbe11ed6dfe4005d4552edcd4] <==
	
	
	==> coredns [eeb71d3b8d0041df6dc39d4f7c0565870f1b9f6a5f7c7369c5d95768bfe0f354] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35437 - 27228 "HINFO IN 3838349583231103733.8696173232521958495. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.021937551s
	
	
	==> describe nodes <==
	Name:               pause-711635
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-711635
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7
	                    minikube.k8s.io/name=pause-711635
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T14_45_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 14:45:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-711635
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 14:47:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 14:47:27 +0000   Sat, 13 Dec 2025 14:45:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 14:47:27 +0000   Sat, 13 Dec 2025 14:45:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 14:47:27 +0000   Sat, 13 Dec 2025 14:45:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 14:47:27 +0000   Sat, 13 Dec 2025 14:45:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.50
	  Hostname:    pause-711635
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	System Info:
	  Machine ID:                 d6ed8b0065374a05a3e2f89359d073e1
	  System UUID:                d6ed8b00-6537-4a05-a3e2-f89359d073e1
	  Boot ID:                    773740d6-f388-4ab3-a683-4e6deee155f8
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-rtkhx                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     2m1s
	  kube-system                 etcd-pause-711635                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         2m6s
	  kube-system                 kube-apiserver-pause-711635             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-controller-manager-pause-711635    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-proxy-ck5nd                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-scheduler-pause-711635             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 119s               kube-proxy       
	  Normal  Starting                 27s                kube-proxy       
	  Normal  Starting                 42s                kube-proxy       
	  Normal  Starting                 2m7s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m6s               kubelet          Node pause-711635 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m6s               kubelet          Node pause-711635 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m6s               kubelet          Node pause-711635 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m6s               kubelet          Node pause-711635 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  2m6s               kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m2s               node-controller  Node pause-711635 event: Registered Node pause-711635 in Controller
	  Normal  Starting                 23s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node pause-711635 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node pause-711635 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)  kubelet          Node pause-711635 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           17s                node-controller  Node pause-711635 event: Registered Node pause-711635 in Controller
	
	
	==> dmesg <==
	[Dec13 14:45] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001507] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.008562] (rpcbind)[121]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.171776] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.088908] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.104242] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.146477] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.612482] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.074995] kauditd_printk_skb: 213 callbacks suppressed
	[Dec13 14:46] kauditd_printk_skb: 38 callbacks suppressed
	[Dec13 14:47] kauditd_printk_skb: 319 callbacks suppressed
	[  +0.527616] kauditd_printk_skb: 78 callbacks suppressed
	[  +1.812557] kauditd_printk_skb: 27 callbacks suppressed
	
	
	==> etcd [7d368c6c2e20483aa5b61f03e53d3ad11acd476917ff5089ec8350ddabf87027] <==
	{"level":"warn","ts":"2025-12-13T14:47:04.396927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:04.418375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:04.431356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:04.443285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:04.461255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:04.476304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:04.610213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40572","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-13T14:47:06.077801Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-13T14:47:06.077858Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-711635","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.50:2380"],"advertise-client-urls":["https://192.168.50.50:2379"]}
	{"level":"error","ts":"2025-12-13T14:47:06.077938Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-13T14:47:13.084713Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-13T14:47:13.084763Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T14:47:13.084779Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"c0dcbd712fbd8799","current-leader-member-id":"c0dcbd712fbd8799"}
	{"level":"info","ts":"2025-12-13T14:47:13.084855Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-13T14:47:13.084864Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-13T14:47:13.086376Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.50:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-13T14:47:13.086475Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.50:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-13T14:47:13.086495Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.50:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-13T14:47:13.086549Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-13T14:47:13.086567Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-13T14:47:13.086646Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T14:47:13.088562Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.50.50:2380"}
	{"level":"error","ts":"2025-12-13T14:47:13.088663Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.50:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T14:47:13.088700Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.50.50:2380"}
	{"level":"info","ts":"2025-12-13T14:47:13.088717Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-711635","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.50:2380"],"advertise-client-urls":["https://192.168.50.50:2379"]}
	
	
	==> etcd [a6b973ab72f12b27712b779a75a8502c3905842376c161796ccceed4a4c168fb] <==
	{"level":"warn","ts":"2025-12-13T14:47:26.812908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:26.824681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:26.833171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:26.844139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:26.849081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:26.856617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:26.863267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:26.873863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:26.880702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:26.886380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:26.895974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:26.907156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:26.917619Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:26.925957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:26.935147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:26.943505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:26.952124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:26.958295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:26.967489Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:26.992753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:27.003896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:27.012155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:27.019992Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:27.028933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:47:27.083504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44038","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:47:47 up 2 min,  0 users,  load average: 0.74, 0.28, 0.10
	Linux pause-711635 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Dec 13 11:18:23 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [203150d60068702447319418b794e907cb9eb775fe0083139f0124f2ec26cd6f] <==
	I1213 14:47:27.804478       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1213 14:47:27.804534       1 policy_source.go:240] refreshing policies
	I1213 14:47:27.804918       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1213 14:47:27.805108       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1213 14:47:27.810278       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 14:47:27.814477       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1213 14:47:27.816641       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1213 14:47:27.816729       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1213 14:47:27.816848       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1213 14:47:27.818082       1 aggregator.go:171] initial CRD sync complete...
	I1213 14:47:27.818130       1 autoregister_controller.go:144] Starting autoregister controller
	I1213 14:47:27.818156       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1213 14:47:27.818172       1 cache.go:39] Caches are synced for autoregister controller
	I1213 14:47:27.824692       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1213 14:47:27.892518       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1213 14:47:27.895010       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 14:47:27.998347       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 14:47:28.609163       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1213 14:47:29.083021       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 14:47:29.122623       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1213 14:47:29.150615       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 14:47:29.157413       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 14:47:31.047964       1 controller.go:667] quota admission added evaluator for: endpoints
	I1213 14:47:31.152344       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 14:47:31.201510       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-apiserver [a6bdf39155d38fe6ad628304312983b81c0e4d68980f0da049cbf021a87a05b9] <==
	W1213 14:47:21.783233       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:47:21.784517       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:47:21.845426       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:47:21.868125       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:47:21.927643       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:47:21.937157       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:47:22.017552       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:47:22.018967       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:47:22.163201       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:47:22.173578       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:47:22.217092       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:47:22.238059       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:47:22.277022       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:47:22.323086       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:47:22.337512       1 logging.go:55] [core] [Channel #8 SubChannel #10]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:47:22.404460       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:47:22.415757       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:47:22.517520       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:47:22.588900       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:47:22.600643       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:47:22.626271       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:47:22.634756       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:47:22.774402       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:47:22.948239       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:47:23.012174       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [55df0f6b2939ce926865c3d822f25235c14de29a8ea6a60cd697e55cdf027d9d] <==
	I1213 14:47:04.097139       1 serving.go:386] Generated self-signed cert in-memory
	I1213 14:47:04.829069       1 controllermanager.go:191] "Starting" version="v1.34.2"
	I1213 14:47:04.829102       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 14:47:04.834599       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1213 14:47:04.834728       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1213 14:47:04.835262       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1213 14:47:04.835352       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	
	
	==> kube-controller-manager [7a51d032e9184587d0f79265e51ed8e11030758e36d03690b1e5acdec2ad5884] <==
	I1213 14:47:30.829984       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1213 14:47:30.831781       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1213 14:47:30.832623       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1213 14:47:30.832771       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-711635"
	I1213 14:47:30.832855       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1213 14:47:30.835097       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1213 14:47:30.848590       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1213 14:47:30.848829       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1213 14:47:30.849088       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1213 14:47:30.849710       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1213 14:47:30.849897       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1213 14:47:30.850892       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1213 14:47:30.851413       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1213 14:47:30.851500       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1213 14:47:30.852085       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1213 14:47:30.857296       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 14:47:30.862192       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1213 14:47:30.862782       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1213 14:47:30.862879       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 14:47:30.864797       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1213 14:47:30.879563       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1213 14:47:30.894952       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1213 14:47:30.898082       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1213 14:47:30.898119       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1213 14:47:31.157089       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [4e2b7455204e91c5826b2ce34e52f4243b69ac1bbcaff1a3df0ec71d0dd1af34] <==
	I1213 14:47:03.381926       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 14:47:05.382100       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 14:47:05.384453       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.50"]
	E1213 14:47:05.386381       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 14:47:05.545763       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1213 14:47:05.545957       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1213 14:47:05.546035       1 server_linux.go:132] "Using iptables Proxier"
	I1213 14:47:05.562261       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 14:47:05.562585       1 server.go:527] "Version info" version="v1.34.2"
	I1213 14:47:05.563041       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 14:47:05.576389       1 config.go:200] "Starting service config controller"
	I1213 14:47:05.576413       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 14:47:05.580356       1 config.go:309] "Starting node config controller"
	I1213 14:47:05.580381       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 14:47:05.580388       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 14:47:05.586362       1 config.go:106] "Starting endpoint slice config controller"
	I1213 14:47:05.586387       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 14:47:05.586540       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 14:47:05.588106       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 14:47:05.679130       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 14:47:05.688955       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1213 14:47:05.689880       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [91cb0910783bea73269ae7ba3630a37c3379547d9c3f65a962c968cea9f14cd7] <==
	I1213 14:47:20.250773       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 14:47:20.250826       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.50"]
	E1213 14:47:20.250966       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 14:47:20.283246       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1213 14:47:20.283382       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1213 14:47:20.283509       1 server_linux.go:132] "Using iptables Proxier"
	I1213 14:47:20.292053       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 14:47:20.292242       1 server.go:527] "Version info" version="v1.34.2"
	I1213 14:47:20.292271       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 14:47:20.296300       1 config.go:200] "Starting service config controller"
	I1213 14:47:20.296398       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 14:47:20.296427       1 config.go:106] "Starting endpoint slice config controller"
	I1213 14:47:20.296445       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 14:47:20.296465       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 14:47:20.296478       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 14:47:20.297989       1 config.go:309] "Starting node config controller"
	I1213 14:47:20.298015       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 14:47:20.298022       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 14:47:20.397053       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 14:47:20.397077       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1213 14:47:20.397059       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	E1213 14:47:23.218502       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": unexpected EOF"
	
	
	==> kube-scheduler [0485cc8093555115bf029d30a24bed9c1eb5e14ad63d4669822c93895d864884] <==
	I1213 14:47:04.489395       1 serving.go:386] Generated self-signed cert in-memory
	I1213 14:47:05.712381       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1213 14:47:05.712412       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1213 14:47:05.712471       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I1213 14:47:05.718727       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1213 14:47:05.718764       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1213 14:47:05.718801       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 14:47:05.718809       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 14:47:05.718820       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1213 14:47:05.718825       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1213 14:47:05.723201       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	E1213 14:47:05.723266       1 server.go:286] "handlers are not fully synchronized" err="context canceled"
	I1213 14:47:05.723375       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1213 14:47:05.723400       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1213 14:47:05.723457       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1213 14:47:05.723479       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1213 14:47:05.723483       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1213 14:47:05.723505       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [dac16650b8d2aca7e084adc8d169cd7d123d4602922b5efc45d93f2003ddca10] <==
	E1213 14:47:25.416423       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.50.50:8443/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.50.50:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 14:47:25.416469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.50.50:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.50.50:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 14:47:25.416556       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.50.50:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.50.50:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 14:47:25.416612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.50.50:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.50.50:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1213 14:47:25.417023       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.50.50:8443/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.50.50:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 14:47:27.750223       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 14:47:27.752581       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 14:47:27.752797       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 14:47:27.752863       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1213 14:47:27.752917       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 14:47:27.752959       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 14:47:27.753012       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1213 14:47:27.753064       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 14:47:27.753107       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1213 14:47:27.753144       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1213 14:47:27.753183       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 14:47:27.753218       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1213 14:47:27.753261       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1213 14:47:27.753295       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 14:47:27.753421       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1213 14:47:27.753472       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 14:47:27.753511       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 14:47:27.753578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1213 14:47:27.760628       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1213 14:47:30.613394       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 13 14:47:27 pause-711635 kubelet[4204]: E1213 14:47:27.087591    4204 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-711635\" not found" node="pause-711635"
	Dec 13 14:47:27 pause-711635 kubelet[4204]: I1213 14:47:27.786991    4204 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-711635"
	Dec 13 14:47:27 pause-711635 kubelet[4204]: I1213 14:47:27.854582    4204 apiserver.go:52] "Watching apiserver"
	Dec 13 14:47:27 pause-711635 kubelet[4204]: I1213 14:47:27.856943    4204 kubelet_node_status.go:124] "Node was previously registered" node="pause-711635"
	Dec 13 14:47:27 pause-711635 kubelet[4204]: I1213 14:47:27.857032    4204 kubelet_node_status.go:78] "Successfully registered node" node="pause-711635"
	Dec 13 14:47:27 pause-711635 kubelet[4204]: I1213 14:47:27.857056    4204 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 13 14:47:27 pause-711635 kubelet[4204]: I1213 14:47:27.858614    4204 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 13 14:47:27 pause-711635 kubelet[4204]: I1213 14:47:27.907667    4204 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 13 14:47:27 pause-711635 kubelet[4204]: E1213 14:47:27.918195    4204 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-711635\" already exists" pod="kube-system/kube-apiserver-pause-711635"
	Dec 13 14:47:27 pause-711635 kubelet[4204]: I1213 14:47:27.918542    4204 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-711635"
	Dec 13 14:47:27 pause-711635 kubelet[4204]: E1213 14:47:27.930914    4204 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-711635\" already exists" pod="kube-system/kube-controller-manager-pause-711635"
	Dec 13 14:47:27 pause-711635 kubelet[4204]: I1213 14:47:27.930951    4204 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-711635"
	Dec 13 14:47:27 pause-711635 kubelet[4204]: E1213 14:47:27.944200    4204 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-711635\" already exists" pod="kube-system/kube-scheduler-pause-711635"
	Dec 13 14:47:27 pause-711635 kubelet[4204]: I1213 14:47:27.944284    4204 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-711635"
	Dec 13 14:47:27 pause-711635 kubelet[4204]: E1213 14:47:27.969187    4204 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-pause-711635\" already exists" pod="kube-system/etcd-pause-711635"
	Dec 13 14:47:27 pause-711635 kubelet[4204]: I1213 14:47:27.987457    4204 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b82a9f3d-e529-4e43-bb38-6b5d2be9e874-lib-modules\") pod \"kube-proxy-ck5nd\" (UID: \"b82a9f3d-e529-4e43-bb38-6b5d2be9e874\") " pod="kube-system/kube-proxy-ck5nd"
	Dec 13 14:47:27 pause-711635 kubelet[4204]: I1213 14:47:27.987502    4204 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b82a9f3d-e529-4e43-bb38-6b5d2be9e874-xtables-lock\") pod \"kube-proxy-ck5nd\" (UID: \"b82a9f3d-e529-4e43-bb38-6b5d2be9e874\") " pod="kube-system/kube-proxy-ck5nd"
	Dec 13 14:47:28 pause-711635 kubelet[4204]: I1213 14:47:28.087106    4204 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-711635"
	Dec 13 14:47:28 pause-711635 kubelet[4204]: I1213 14:47:28.087993    4204 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-711635"
	Dec 13 14:47:28 pause-711635 kubelet[4204]: E1213 14:47:28.101579    4204 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-711635\" already exists" pod="kube-system/kube-apiserver-pause-711635"
	Dec 13 14:47:28 pause-711635 kubelet[4204]: E1213 14:47:28.103104    4204 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-pause-711635\" already exists" pod="kube-system/etcd-pause-711635"
	Dec 13 14:47:35 pause-711635 kubelet[4204]: E1213 14:47:35.041050    4204 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765637255040735169 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	Dec 13 14:47:35 pause-711635 kubelet[4204]: E1213 14:47:35.041071    4204 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765637255040735169 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	Dec 13 14:47:45 pause-711635 kubelet[4204]: E1213 14:47:45.046874    4204 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765637265043822394 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	Dec 13 14:47:45 pause-711635 kubelet[4204]: E1213 14:47:45.047434    4204 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765637265043822394 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-711635 -n pause-711635
helpers_test.go:270: (dbg) Run:  kubectl --context pause-711635 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (82.32s)

                                                
                                    

Test pass (320/370)

Order    Passed test    Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 24.96
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.17
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.16
12 TestDownloadOnly/v1.34.2/json-events 11.31
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.08
18 TestDownloadOnly/v1.34.2/DeleteAll 0.17
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.16
21 TestDownloadOnly/v1.35.0-beta.0/json-events 11.01
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.07
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.17
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.15
30 TestBinaryMirror 0.67
31 TestOffline 87.22
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 126.66
40 TestAddons/serial/GCPAuth/Namespaces 0.14
41 TestAddons/serial/GCPAuth/FakeCredentials 11.52
44 TestAddons/parallel/Registry 18.3
45 TestAddons/parallel/RegistryCreds 0.65
47 TestAddons/parallel/InspektorGadget 11.89
48 TestAddons/parallel/MetricsServer 6.99
50 TestAddons/parallel/CSI 46.59
51 TestAddons/parallel/Headlamp 19.8
52 TestAddons/parallel/CloudSpanner 5.5
53 TestAddons/parallel/LocalPath 54.59
54 TestAddons/parallel/NvidiaDevicePlugin 6.49
55 TestAddons/parallel/Yakd 11.83
57 TestAddons/StoppedEnableDisable 88.1
58 TestCertOptions 78.26
59 TestCertExpiration 273.99
61 TestForceSystemdFlag 56.68
62 TestForceSystemdEnv 53.06
67 TestErrorSpam/setup 35.73
68 TestErrorSpam/start 0.33
69 TestErrorSpam/status 0.65
70 TestErrorSpam/pause 1.47
71 TestErrorSpam/unpause 1.67
72 TestErrorSpam/stop 5.31
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 73.99
77 TestFunctional/serial/AuditLog 0
79 TestFunctional/serial/KubeContext 0.05
83 TestFunctional/serial/CacheCmd/cache/add_remote 2.93
84 TestFunctional/serial/CacheCmd/cache/add_local 2.19
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.17
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.36
89 TestFunctional/serial/CacheCmd/cache/delete 0.13
92 TestFunctional/delete_echo-server_images 0
93 TestFunctional/delete_my-image_image 0
94 TestFunctional/delete_minikube_cached_images 0
98 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
99 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 71.87
100 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
101 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 30.34
102 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.04
103 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 0.07
106 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 3.21
107 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 2.19
108 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.07
109 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.07
110 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.18
111 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.55
112 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.13
113 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 0.13
114 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 0.12
115 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 33.61
116 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 0.07
117 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.2
118 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.28
119 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 4.09
121 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.42
122 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 39.1
123 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.24
124 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.13
125 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 0.71
129 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 12.39
130 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.16
131 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 46.85
133 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.32
134 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 1.07
135 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 30.84
136 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.19
137 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.07
141 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 0.07
143 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.35
145 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.39
146 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.07
147 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.45
148 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.22
149 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.18
150 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.18
151 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.2
152 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 4.04
153 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 1.98
154 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.07
155 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.07
156 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.07
166 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.33
167 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 0.97
168 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 2.2
169 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.83
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 8.26
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.57
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 24.19
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.31
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.3
176 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.29
177 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 8.76
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 1.2
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 1.25
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.24
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.23
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.23
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.4
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.03
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
191 TestMultiControlPlane/serial/StartCluster 203.24
192 TestMultiControlPlane/serial/DeployApp 7.19
193 TestMultiControlPlane/serial/PingHostFromPods 1.27
194 TestMultiControlPlane/serial/AddWorkerNode 70.81
195 TestMultiControlPlane/serial/NodeLabels 0.07
196 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.65
197 TestMultiControlPlane/serial/CopyFile 10.43
198 TestMultiControlPlane/serial/StopSecondaryNode 84.84
199 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.48
200 TestMultiControlPlane/serial/RestartSecondaryNode 35.54
201 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.82
202 TestMultiControlPlane/serial/RestartClusterKeepsNodes 348.74
203 TestMultiControlPlane/serial/DeleteSecondaryNode 18.2
204 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.51
205 TestMultiControlPlane/serial/StopCluster 258.98
206 TestMultiControlPlane/serial/RestartCluster 98.08
207 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.49
208 TestMultiControlPlane/serial/AddSecondaryNode 65.21
209 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.64
214 TestJSONOutput/start/Command 75.41
215 TestJSONOutput/start/Audit 0
217 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
218 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
220 TestJSONOutput/pause/Command 0.68
221 TestJSONOutput/pause/Audit 0
223 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
224 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
226 TestJSONOutput/unpause/Command 0.63
227 TestJSONOutput/unpause/Audit 0
229 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
230 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
232 TestJSONOutput/stop/Command 7.23
233 TestJSONOutput/stop/Audit 0
235 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
236 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
237 TestErrorJSONOutput 0.22
242 TestMainNoArgs 0.06
243 TestMinikubeProfile 73.37
246 TestMountStart/serial/StartWithMountFirst 19.16
247 TestMountStart/serial/VerifyMountFirst 0.3
248 TestMountStart/serial/StartWithMountSecond 19.03
249 TestMountStart/serial/VerifyMountSecond 0.3
250 TestMountStart/serial/DeleteFirst 0.67
251 TestMountStart/serial/VerifyMountPostDelete 0.3
252 TestMountStart/serial/Stop 1.21
253 TestMountStart/serial/RestartStopped 18.29
254 TestMountStart/serial/VerifyMountPostStop 0.29
257 TestMultiNode/serial/FreshStart2Nodes 94.36
258 TestMultiNode/serial/DeployApp2Nodes 6.22
259 TestMultiNode/serial/PingHostFrom2Pods 0.83
260 TestMultiNode/serial/AddNode 41.6
261 TestMultiNode/serial/MultiNodeLabels 0.06
262 TestMultiNode/serial/ProfileList 0.44
263 TestMultiNode/serial/CopyFile 5.9
264 TestMultiNode/serial/StopNode 2.12
265 TestMultiNode/serial/StartAfterStop 35.97
266 TestMultiNode/serial/RestartKeepsNodes 281.68
267 TestMultiNode/serial/DeleteNode 2.46
268 TestMultiNode/serial/StopMultiNode 160.74
269 TestMultiNode/serial/RestartMultiNode 90
270 TestMultiNode/serial/ValidateNameConflict 37.36
277 TestScheduledStopUnix 107.16
281 TestRunningBinaryUpgrade 409.31
283 TestKubernetesUpgrade 97.52
286 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
295 TestPause/serial/Start 98.98
296 TestNoKubernetes/serial/StartWithK8s 76.6
297 TestNoKubernetes/serial/StartWithStopK8s 6.1
298 TestNoKubernetes/serial/Start 19.73
299 TestStoppedBinaryUpgrade/Setup 3.81
300 TestStoppedBinaryUpgrade/Upgrade 85.22
302 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
303 TestNoKubernetes/serial/VerifyK8sNotRunning 0.17
304 TestNoKubernetes/serial/ProfileList 1.07
305 TestNoKubernetes/serial/Stop 1.31
306 TestNoKubernetes/serial/StartNoArgs 43.05
307 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
308 TestStoppedBinaryUpgrade/MinikubeLogs 1.04
312 TestISOImage/Setup 22.05
317 TestNetworkPlugins/group/false 4.35
322 TestISOImage/Binaries/crictl 0.16
323 TestISOImage/Binaries/curl 0.17
324 TestISOImage/Binaries/docker 0.17
325 TestISOImage/Binaries/git 0.16
326 TestISOImage/Binaries/iptables 0.16
327 TestISOImage/Binaries/podman 0.16
328 TestISOImage/Binaries/rsync 0.16
329 TestISOImage/Binaries/socat 0.17
330 TestISOImage/Binaries/wget 0.16
331 TestISOImage/Binaries/VBoxControl 0.16
332 TestISOImage/Binaries/VBoxService 0.17
334 TestStartStop/group/old-k8s-version/serial/FirstStart 83.93
336 TestStartStop/group/no-preload/serial/FirstStart 86.53
337 TestStartStop/group/old-k8s-version/serial/DeployApp 10.38
338 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.22
339 TestStartStop/group/old-k8s-version/serial/Stop 83.7
341 TestStartStop/group/embed-certs/serial/FirstStart 74.53
342 TestStartStop/group/no-preload/serial/DeployApp 10.37
343 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.9
344 TestStartStop/group/no-preload/serial/Stop 82.55
345 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.16
346 TestStartStop/group/old-k8s-version/serial/SecondStart 43.94
348 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 98.76
349 TestStartStop/group/embed-certs/serial/DeployApp 13.66
350 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.97
351 TestStartStop/group/embed-certs/serial/Stop 88.55
352 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 14.01
353 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
354 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.19
355 TestStartStop/group/old-k8s-version/serial/Pause 2.36
357 TestStartStop/group/newest-cni/serial/FirstStart 38.2
358 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.16
359 TestStartStop/group/no-preload/serial/SecondStart 57.47
360 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.32
361 TestStartStop/group/newest-cni/serial/DeployApp 0
362 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.3
363 TestStartStop/group/newest-cni/serial/Stop 80.09
364 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.94
365 TestStartStop/group/default-k8s-diff-port/serial/Stop 85.5
366 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.16
367 TestStartStop/group/embed-certs/serial/SecondStart 44.45
368 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 9.01
369 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
370 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.31
371 TestStartStop/group/no-preload/serial/Pause 2.69
372 TestNetworkPlugins/group/auto/Start 85.9
373 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 12.01
374 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
375 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.14
376 TestStartStop/group/newest-cni/serial/SecondStart 30.97
377 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.21
378 TestStartStop/group/embed-certs/serial/Pause 2.67
379 TestNetworkPlugins/group/kindnet/Start 69.12
380 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.14
381 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 63.5
382 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
383 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
384 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
385 TestStartStop/group/newest-cni/serial/Pause 3.34
386 TestNetworkPlugins/group/calico/Start 98.73
387 TestNetworkPlugins/group/auto/KubeletFlags 0.22
388 TestNetworkPlugins/group/auto/NetCatPod 14.32
389 TestNetworkPlugins/group/auto/DNS 0.17
390 TestNetworkPlugins/group/auto/Localhost 0.14
391 TestNetworkPlugins/group/auto/HairPin 0.13
392 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
393 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
394 TestNetworkPlugins/group/kindnet/KubeletFlags 0.19
395 TestNetworkPlugins/group/kindnet/NetCatPod 11.24
396 TestNetworkPlugins/group/custom-flannel/Start 79.35
397 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
398 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.3
399 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.84
400 TestNetworkPlugins/group/kindnet/DNS 0.22
401 TestNetworkPlugins/group/kindnet/Localhost 0.18
402 TestNetworkPlugins/group/kindnet/HairPin 0.17
403 TestNetworkPlugins/group/bridge/Start 96.51
404 TestNetworkPlugins/group/flannel/Start 91.45
405 TestNetworkPlugins/group/calico/ControllerPod 6.01
406 TestNetworkPlugins/group/calico/KubeletFlags 0.18
407 TestNetworkPlugins/group/calico/NetCatPod 11.23
408 TestNetworkPlugins/group/calico/DNS 0.23
409 TestNetworkPlugins/group/calico/Localhost 0.15
410 TestNetworkPlugins/group/calico/HairPin 0.17
411 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
412 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.27
413 TestNetworkPlugins/group/enable-default-cni/Start 78.04
414 TestNetworkPlugins/group/custom-flannel/DNS 0.17
415 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
416 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
418 TestISOImage/PersistentMounts//data 0.17
419 TestISOImage/PersistentMounts//var/lib/docker 0.18
420 TestISOImage/PersistentMounts//var/lib/cni 0.19
421 TestISOImage/PersistentMounts//var/lib/kubelet 0.17
422 TestISOImage/PersistentMounts//var/lib/minikube 0.19
423 TestISOImage/PersistentMounts//var/lib/toolbox 0.17
424 TestISOImage/PersistentMounts//var/lib/boot2docker 0.17
425 TestNetworkPlugins/group/bridge/KubeletFlags 0.19
426 TestNetworkPlugins/group/bridge/NetCatPod 11.25
427 TestISOImage/VersionJSON 0.18
428 TestISOImage/eBPFSupport 0.17
429 TestNetworkPlugins/group/flannel/ControllerPod 6.01
430 TestNetworkPlugins/group/bridge/DNS 0.14
431 TestNetworkPlugins/group/bridge/Localhost 0.14
432 TestNetworkPlugins/group/bridge/HairPin 0.14
433 TestNetworkPlugins/group/flannel/KubeletFlags 0.19
434 TestNetworkPlugins/group/flannel/NetCatPod 10.22
435 TestNetworkPlugins/group/flannel/DNS 0.16
436 TestNetworkPlugins/group/flannel/Localhost 0.14
437 TestNetworkPlugins/group/flannel/HairPin 0.13
438 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.17
439 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.19
440 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
441 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
442 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
TestDownloadOnly/v1.28.0/json-events (24.96s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-721656 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-721656 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (24.955099927s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (24.96s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1213 13:05:39.548313  135234 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1213 13:05:39.548425  135234 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-131207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-721656
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-721656: exit status 85 (77.095234ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-721656 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-721656 │ jenkins │ v1.37.0 │ 13 Dec 25 13:05 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 13:05:14
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 13:05:14.646941  135247 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:05:14.647217  135247 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:05:14.647227  135247 out.go:374] Setting ErrFile to fd 2...
	I1213 13:05:14.647231  135247 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:05:14.647413  135247 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-131207/.minikube/bin
	W1213 13:05:14.647539  135247 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22122-131207/.minikube/config/config.json: open /home/jenkins/minikube-integration/22122-131207/.minikube/config/config.json: no such file or directory
	I1213 13:05:14.648038  135247 out.go:368] Setting JSON to true
	I1213 13:05:14.649652  135247 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2855,"bootTime":1765628260,"procs":378,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:05:14.649717  135247 start.go:143] virtualization: kvm guest
	I1213 13:05:14.653326  135247 out.go:99] [download-only-721656] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1213 13:05:14.653492  135247 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22122-131207/.minikube/cache/preloaded-tarball: no such file or directory
	I1213 13:05:14.653546  135247 notify.go:221] Checking for updates...
	I1213 13:05:14.654597  135247 out.go:171] MINIKUBE_LOCATION=22122
	I1213 13:05:14.655671  135247 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:05:14.656793  135247 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22122-131207/kubeconfig
	I1213 13:05:14.657818  135247 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-131207/.minikube
	I1213 13:05:14.658898  135247 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1213 13:05:14.660742  135247 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1213 13:05:14.660950  135247 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 13:05:15.124308  135247 out.go:99] Using the kvm2 driver based on user configuration
	I1213 13:05:15.124340  135247 start.go:309] selected driver: kvm2
	I1213 13:05:15.124347  135247 start.go:927] validating driver "kvm2" against <nil>
	I1213 13:05:15.124688  135247 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 13:05:15.125248  135247 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1213 13:05:15.125418  135247 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 13:05:15.125463  135247 cni.go:84] Creating CNI manager for ""
	I1213 13:05:15.125517  135247 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 13:05:15.125528  135247 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1213 13:05:15.125566  135247 start.go:353] cluster config:
	{Name:download-only-721656 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-721656 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:05:15.125726  135247 iso.go:125] acquiring lock: {Name:mk3b22d147b17c1b05cdcd03e16c3f962e91cdaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 13:05:15.127500  135247 out.go:99] Downloading VM boot image ...
	I1213 13:05:15.127531  135247 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso.sha256 -> /home/jenkins/minikube-integration/22122-131207/.minikube/cache/iso/amd64/minikube-v1.37.0-1765613186-22122-amd64.iso
	I1213 13:05:26.518484  135247 out.go:99] Starting "download-only-721656" primary control-plane node in "download-only-721656" cluster
	I1213 13:05:26.518537  135247 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1213 13:05:26.627690  135247 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1213 13:05:26.627720  135247 cache.go:65] Caching tarball of preloaded images
	I1213 13:05:26.628422  135247 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1213 13:05:26.630005  135247 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1213 13:05:26.630029  135247 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1213 13:05:26.741588  135247 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1213 13:05:26.741706  135247 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/22122-131207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-721656 host does not exist
	  To start a cluster, run: "minikube start -p download-only-721656"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.17s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-721656
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                    
TestDownloadOnly/v1.34.2/json-events (11.31s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-637501 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-637501 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (11.307901245s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (11.31s)

                                                
                                    
TestDownloadOnly/v1.34.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1213 13:05:51.260049  135234 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1213 13:05:51.260122  135234 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-131207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-637501
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-637501: exit status 85 (78.053535ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-721656 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-721656 │ jenkins │ v1.37.0 │ 13 Dec 25 13:05 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.37.0 │ 13 Dec 25 13:05 UTC │ 13 Dec 25 13:05 UTC │
	│ delete  │ -p download-only-721656                                                                                                                                                 │ download-only-721656 │ jenkins │ v1.37.0 │ 13 Dec 25 13:05 UTC │ 13 Dec 25 13:05 UTC │
	│ start   │ -o=json --download-only -p download-only-637501 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-637501 │ jenkins │ v1.37.0 │ 13 Dec 25 13:05 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 13:05:40
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 13:05:40.007430  135507 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:05:40.008001  135507 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:05:40.008013  135507 out.go:374] Setting ErrFile to fd 2...
	I1213 13:05:40.008017  135507 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:05:40.008259  135507 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-131207/.minikube/bin
	I1213 13:05:40.008761  135507 out.go:368] Setting JSON to true
	I1213 13:05:40.009798  135507 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2880,"bootTime":1765628260,"procs":378,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:05:40.009870  135507 start.go:143] virtualization: kvm guest
	I1213 13:05:40.011815  135507 out.go:99] [download-only-637501] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 13:05:40.012019  135507 notify.go:221] Checking for updates...
	I1213 13:05:40.013289  135507 out.go:171] MINIKUBE_LOCATION=22122
	I1213 13:05:40.014727  135507 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:05:40.016012  135507 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22122-131207/kubeconfig
	I1213 13:05:40.017134  135507 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-131207/.minikube
	I1213 13:05:40.018233  135507 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1213 13:05:40.020107  135507 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1213 13:05:40.020427  135507 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 13:05:40.055618  135507 out.go:99] Using the kvm2 driver based on user configuration
	I1213 13:05:40.055652  135507 start.go:309] selected driver: kvm2
	I1213 13:05:40.055659  135507 start.go:927] validating driver "kvm2" against <nil>
	I1213 13:05:40.055991  135507 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 13:05:40.056513  135507 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1213 13:05:40.056650  135507 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 13:05:40.056688  135507 cni.go:84] Creating CNI manager for ""
	I1213 13:05:40.056743  135507 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 13:05:40.056754  135507 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1213 13:05:40.056817  135507 start.go:353] cluster config:
	{Name:download-only-637501 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:download-only-637501 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:05:40.056941  135507 iso.go:125] acquiring lock: {Name:mk3b22d147b17c1b05cdcd03e16c3f962e91cdaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 13:05:40.058135  135507 out.go:99] Starting "download-only-637501" primary control-plane node in "download-only-637501" cluster
	I1213 13:05:40.058171  135507 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 13:05:40.577482  135507 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1213 13:05:40.577520  135507 cache.go:65] Caching tarball of preloaded images
	I1213 13:05:40.577739  135507 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 13:05:40.579433  135507 out.go:99] Downloading Kubernetes v1.34.2 preload ...
	I1213 13:05:40.579468  135507 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1213 13:05:40.689297  135507 preload.go:295] Got checksum from GCS API "40ac2ac600e3e4b9dc7a3f8c6cb2ed91"
	I1213 13:05:40.689360  135507 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:40ac2ac600e3e4b9dc7a3f8c6cb2ed91 -> /home/jenkins/minikube-integration/22122-131207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-637501 host does not exist
	  To start a cluster, run: "minikube start -p download-only-637501"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.08s)
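The preload step above requests the tarball with an explicit ?checksum=md5:... query after fetching the expected checksum from the GCS API. A minimal sketch of that download-and-verify pattern, as a hypothetical standalone helper rather than minikube's own download.go (URL and checksum copied from the log):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadWithMD5 fetches url into dest and verifies the payload against wantMD5.
func downloadWithMD5(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	h := md5.New()
	// Write to disk and hash in one pass.
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	// URL and checksum taken from the log above.
	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4"
	if err := downloadWithMD5(url, "preloaded-images.tar.lz4", "40ac2ac600e3e4b9dc7a3f8c6cb2ed91"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}

Hashing while streaming to disk avoids re-reading a multi-hundred-megabyte tarball just to verify it.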

                                                
                                    
TestDownloadOnly/v1.34.2/DeleteAll (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.17s)

                                                
                                    
TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-637501
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.16s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/json-events (11.01s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-059438 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-059438 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (11.009520183s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (11.01s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1213 13:06:02.685036  135234 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
I1213 13:06:02.685106  135234 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-131207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-059438
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-059438: exit status 85 (73.924847ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                      ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-721656 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio        │ download-only-721656 │ jenkins │ v1.37.0 │ 13 Dec 25 13:05 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 13 Dec 25 13:05 UTC │ 13 Dec 25 13:05 UTC │
	│ delete  │ -p download-only-721656                                                                                                                                                        │ download-only-721656 │ jenkins │ v1.37.0 │ 13 Dec 25 13:05 UTC │ 13 Dec 25 13:05 UTC │
	│ start   │ -o=json --download-only -p download-only-637501 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio        │ download-only-637501 │ jenkins │ v1.37.0 │ 13 Dec 25 13:05 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 13 Dec 25 13:05 UTC │ 13 Dec 25 13:05 UTC │
	│ delete  │ -p download-only-637501                                                                                                                                                        │ download-only-637501 │ jenkins │ v1.37.0 │ 13 Dec 25 13:05 UTC │ 13 Dec 25 13:05 UTC │
	│ start   │ -o=json --download-only -p download-only-059438 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-059438 │ jenkins │ v1.37.0 │ 13 Dec 25 13:05 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 13:05:51
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 13:05:51.732771  135705 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:05:51.733053  135705 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:05:51.733062  135705 out.go:374] Setting ErrFile to fd 2...
	I1213 13:05:51.733067  135705 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:05:51.733335  135705 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-131207/.minikube/bin
	I1213 13:05:51.733846  135705 out.go:368] Setting JSON to true
	I1213 13:05:51.735309  135705 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2892,"bootTime":1765628260,"procs":369,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:05:51.735487  135705 start.go:143] virtualization: kvm guest
	I1213 13:05:51.737143  135705 out.go:99] [download-only-059438] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 13:05:51.737346  135705 notify.go:221] Checking for updates...
	I1213 13:05:51.738200  135705 out.go:171] MINIKUBE_LOCATION=22122
	I1213 13:05:51.739386  135705 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:05:51.740658  135705 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22122-131207/kubeconfig
	I1213 13:05:51.741814  135705 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-131207/.minikube
	I1213 13:05:51.742892  135705 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1213 13:05:51.744743  135705 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1213 13:05:51.745028  135705 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 13:05:51.778018  135705 out.go:99] Using the kvm2 driver based on user configuration
	I1213 13:05:51.778051  135705 start.go:309] selected driver: kvm2
	I1213 13:05:51.778059  135705 start.go:927] validating driver "kvm2" against <nil>
	I1213 13:05:51.778464  135705 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 13:05:51.778995  135705 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1213 13:05:51.779184  135705 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 13:05:51.779210  135705 cni.go:84] Creating CNI manager for ""
	I1213 13:05:51.779280  135705 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 13:05:51.779292  135705 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1213 13:05:51.779365  135705 start.go:353] cluster config:
	{Name:download-only-059438 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:download-only-059438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:05:51.779482  135705 iso.go:125] acquiring lock: {Name:mk3b22d147b17c1b05cdcd03e16c3f962e91cdaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 13:05:51.780724  135705 out.go:99] Starting "download-only-059438" primary control-plane node in "download-only-059438" cluster
	I1213 13:05:51.780746  135705 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 13:05:51.883847  135705 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1213 13:05:51.883888  135705 cache.go:65] Caching tarball of preloaded images
	I1213 13:05:51.884192  135705 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 13:05:51.885984  135705 out.go:99] Downloading Kubernetes v1.35.0-beta.0 preload ...
	I1213 13:05:51.886011  135705 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1213 13:05:52.000904  135705 preload.go:295] Got checksum from GCS API "b4861df7675d96066744278d08e2cd35"
	I1213 13:05:52.000958  135705 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:b4861df7675d96066744278d08e2cd35 -> /home/jenkins/minikube-integration/22122-131207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1213 13:06:01.713017  135705 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 13:06:01.713516  135705 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/download-only-059438/config.json ...
	I1213 13:06:01.713565  135705 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/download-only-059438/config.json: {Name:mk8d38a858b4f030f1aa277644f9723590e41c4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:06:01.713773  135705 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 13:06:01.714004  135705 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/22122-131207/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubectl
	
	
	* The control-plane node download-only-059438 host does not exist
	  To start a cluster, run: "minikube start -p download-only-059438"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.17s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-059438
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestBinaryMirror (0.67s)

                                                
                                                
=== RUN   TestBinaryMirror
I1213 13:06:03.533502  135234 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-716159 --alsologtostderr --binary-mirror http://127.0.0.1:33249 --driver=kvm2  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-716159" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-716159
--- PASS: TestBinaryMirror (0.67s)
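TestBinaryMirror starts minikube with --binary-mirror pointed at a loopback HTTP server; in the run above the kubectl binary was not cached, so the dl.k8s.io URL is logged instead. A stand-in for such a mirror can be little more than a Go file server; the port matches the log, but the served directory layout is an assumption rather than what the test helper actually serves:

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve a local directory of kubectl/kubeadm/kubelet binaries, e.g.
	// ./mirror/v1.34.2/bin/linux/amd64/kubectl (layout echoes dl.k8s.io/release; exact prefix is an assumption).
	http.Handle("/", http.FileServer(http.Dir("./mirror")))
	log.Fatal(http.ListenAndServe("127.0.0.1:33249", nil))
}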

                                                
                                    
TestOffline (87.22s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-196030 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-196030 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m26.31449576s)
helpers_test.go:176: Cleaning up "offline-crio-196030" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-196030
--- PASS: TestOffline (87.22s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-685870
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-685870: exit status 85 (66.853222ms)

                                                
                                                
-- stdout --
	* Profile "addons-685870" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-685870"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-685870
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-685870: exit status 85 (67.534734ms)

                                                
                                                
-- stdout --
	* Profile "addons-685870" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-685870"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (126.66s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-685870 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-685870 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m6.660737929s)
--- PASS: TestAddons/Setup (126.66s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-685870 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-685870 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (11.52s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-685870 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-685870 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [28996d9e-2b5f-4e3c-b142-b2a3308dd12c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [28996d9e-2b5f-4e3c-b142-b2a3308dd12c] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 11.003845437s
addons_test.go:696: (dbg) Run:  kubectl --context addons-685870 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-685870 describe sa gcp-auth-test
addons_test.go:746: (dbg) Run:  kubectl --context addons-685870 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.52s)
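The FakeCredentials checks above assert that the gcp-auth webhook injected GOOGLE_APPLICATION_CREDENTIALS and GOOGLE_CLOUD_PROJECT into the busybox pod. The same probe can be run outside the harness with the kubectl commands shown in the log (pod name, env var names, and context copied from above):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	for _, envVar := range []string{"GOOGLE_APPLICATION_CREDENTIALS", "GOOGLE_CLOUD_PROJECT"} {
		// printenv exits non-zero when the variable is unset, which surfaces as err here.
		out, err := exec.Command("kubectl", "--context", "addons-685870",
			"exec", "busybox", "--", "printenv", envVar).Output()
		if err != nil {
			fmt.Fprintf(os.Stderr, "%s not set in pod: %v\n", envVar, err)
			os.Exit(1)
		}
		fmt.Printf("%s=%s\n", envVar, strings.TrimSpace(string(out)))
	}
}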

                                                
                                    
TestAddons/parallel/Registry (18.3s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 7.132803ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
I1213 13:08:31.765586  135234 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1213 13:08:31.765613  135234 kapi.go:107] duration metric: took 7.891887ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
helpers_test.go:353: "registry-6b586f9694-4xd6c" [42f338ba-b090-4f81-ad48-bcb9795e19cb] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00347444s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-ww99f" [b233ab84-669c-4f80-a75e-051ffeafc9b4] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.008624004s
addons_test.go:394: (dbg) Run:  kubectl --context addons-685870 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-685870 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-685870 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.549626634s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-amd64 -p addons-685870 ip
2025/12/13 13:08:49 [DEBUG] GET http://192.168.39.155:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-685870 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.30s)
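The registry check above amounts to an in-cluster wget --spider against registry.kube-system.svc.cluster.local followed by a GET to the node IP on port 5000. A host-side probe of that second step might look like this (IP and port taken from the log; the timeout value is an assumption):

package main

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://192.168.39.155:5000/")
	if err != nil {
		fmt.Fprintln(os.Stderr, "registry not reachable:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	fmt.Println("registry responded with", resp.Status)
}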

                                                
                                    
TestAddons/parallel/RegistryCreds (0.65s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 6.039828ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-685870
addons_test.go:334: (dbg) Run:  kubectl --context addons-685870 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-685870 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.65s)

                                                
                                    
TestAddons/parallel/InspektorGadget (11.89s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-f45zl" [cd42526d-0188-440f-b630-68c53450c546] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003941591s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-685870 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-685870 addons disable inspektor-gadget --alsologtostderr -v=1: (5.884505762s)
--- PASS: TestAddons/parallel/InspektorGadget (11.89s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.99s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 6.704623ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-xqtfb" [2329f277-682f-41d0-9879-ac4768581afd] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004121098s
addons_test.go:465: (dbg) Run:  kubectl --context addons-685870 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-685870 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.99s)

                                                
                                    
TestAddons/parallel/CSI (46.59s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1213 13:08:31.757732  135234 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 7.903356ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-685870 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-685870 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-685870 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-685870 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-685870 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-685870 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [43742de6-d6ec-4c6a-acbb-c70c52e04b02] Pending
helpers_test.go:353: "task-pv-pod" [43742de6-d6ec-4c6a-acbb-c70c52e04b02] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [43742de6-d6ec-4c6a-acbb-c70c52e04b02] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.005464586s
addons_test.go:574: (dbg) Run:  kubectl --context addons-685870 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-685870 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:436: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:428: (dbg) Run:  kubectl --context addons-685870 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-685870 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-685870 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-685870 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-685870 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-685870 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-685870 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-685870 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-685870 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-685870 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-685870 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-685870 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-685870 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-685870 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-685870 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-685870 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [8ef3ed15-1b50-440e-9ba4-fa4e4784a04a] Pending
helpers_test.go:353: "task-pv-pod-restore" [8ef3ed15-1b50-440e-9ba4-fa4e4784a04a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [8ef3ed15-1b50-440e-9ba4-fa4e4784a04a] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003798239s
addons_test.go:616: (dbg) Run:  kubectl --context addons-685870 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-685870 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-685870 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-685870 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-685870 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-685870 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.684339733s)
--- PASS: TestAddons/parallel/CSI (46.59s)
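The CSI flow above repeatedly shells out to kubectl get pvc ... -o jsonpath={.status.phase} until the claim reports Bound. A rough equivalent of that polling loop, not the helpers_test.go implementation (context, claim name, and namespace copied from the log; the 2s interval is an assumption):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCBound polls the PVC phase via kubectl until it is Bound or the deadline passes.
func waitForPVCBound(kubeContext, name, namespace string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pvc", name, "-n", namespace,
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s not Bound within %s", namespace, name, timeout)
}

func main() {
	if err := waitForPVCBound("addons-685870", "hpvc", "default", 6*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}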

                                                
                                    
TestAddons/parallel/Headlamp (19.8s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-685870 --alsologtostderr -v=1
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-dfcdc64b-glmdm" [81ae4265-de1c-4b07-b160-e2b96dffc898] Pending
helpers_test.go:353: "headlamp-dfcdc64b-glmdm" [81ae4265-de1c-4b07-b160-e2b96dffc898] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-dfcdc64b-glmdm" [81ae4265-de1c-4b07-b160-e2b96dffc898] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.003377986s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-685870 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-685870 addons disable headlamp --alsologtostderr -v=1: (5.941649058s)
--- PASS: TestAddons/parallel/Headlamp (19.80s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.5s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-btmmz" [faec37b3-408d-4c32-8648-f25311bcc19c] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003994667s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-685870 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.50s)

                                                
                                    
TestAddons/parallel/LocalPath (54.59s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-685870 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-685870 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-685870 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-685870 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-685870 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-685870 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-685870 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-685870 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-685870 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [bf48ceae-9a7e-461c-b5a7-1b319485073d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [bf48ceae-9a7e-461c-b5a7-1b319485073d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [bf48ceae-9a7e-461c-b5a7-1b319485073d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.0030475s
addons_test.go:969: (dbg) Run:  kubectl --context addons-685870 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-amd64 -p addons-685870 ssh "cat /opt/local-path-provisioner/pvc-ebf86252-4882-4e05-b2c9-1d3fc597ad06_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-685870 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-685870 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-685870 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-685870 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.845527574s)
--- PASS: TestAddons/parallel/LocalPath (54.59s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.49s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-k6r7t" [fabec6f5-3861-4173-b733-8b09a8eeddfa] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004736198s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-685870 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.49s)

                                                
                                    
TestAddons/parallel/Yakd (11.83s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-5ff678cb9-2rwd5" [7f71d7f1-2b65-45ae-9bff-2bf15bf59393] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003599023s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-685870 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-685870 addons disable yakd --alsologtostderr -v=1: (5.825978759s)
--- PASS: TestAddons/parallel/Yakd (11.83s)

                                                
                                    
TestAddons/StoppedEnableDisable (88.1s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-685870
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-685870: (1m27.894883219s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-685870
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-685870
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-685870
--- PASS: TestAddons/StoppedEnableDisable (88.10s)

                                                
                                    
TestCertOptions (78.26s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-513755 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E1213 14:48:12.153864  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:48:12.585696  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-359736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-513755 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m16.808550585s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-513755 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-513755 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-513755 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-513755" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-513755
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-513755: (1.063179799s)
--- PASS: TestCertOptions (78.26s)
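The openssl step above inspects the generated apiserver certificate for the SANs requested via --apiserver-ips and --apiserver-names. The same check can be done in Go against a copy of that certificate (the local file name is an assumption; the expected IP and DNS names come from the flags in the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"net"
	"os"
)

func main() {
	// A copy of /var/lib/minikube/certs/apiserver.crt fetched from the node (local path is an assumption).
	pemBytes, err := os.ReadFile("apiserver.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// SANs requested by the test flags above: localhost, www.google.com, 127.0.0.1, 192.168.15.15.
	fmt.Println("DNS names:", cert.DNSNames)
	wantIP := net.ParseIP("192.168.15.15")
	for _, ip := range cert.IPAddresses {
		if ip.Equal(wantIP) {
			fmt.Println("found requested apiserver IP:", ip)
		}
	}
}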

                                                
                                    
TestCertExpiration (273.99s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-397503 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-397503 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (47.195521077s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-397503 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-397503 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (45.988946951s)
helpers_test.go:176: Cleaning up "cert-expiration-397503" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-397503
--- PASS: TestCertExpiration (273.99s)
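TestCertExpiration first provisions certificates with a 3m lifetime and then restarts the cluster with --cert-expiration=8760h (one year). One way to inspect a running apiserver's remaining certificate lifetime is over TLS; the endpoint below is only an example address reused from earlier in this log, and InsecureSkipVerify is acceptable here because the connection is used purely for inspection:

package main

import (
	"crypto/tls"
	"fmt"
	"os"
	"time"
)

func main() {
	// Connect to the apiserver and read the leaf certificate's NotAfter.
	conn, err := tls.Dial("tcp", "192.168.39.155:8443", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer conn.Close()
	leaf := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("apiserver cert expires %s (in %s)\n",
		leaf.NotAfter.Format(time.RFC3339), time.Until(leaf.NotAfter).Round(time.Minute))
}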

                                                
                                    
TestForceSystemdFlag (56.68s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-359885 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-359885 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (55.557598952s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-359885 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-359885" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-359885
--- PASS: TestForceSystemdFlag (56.68s)
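TestForceSystemdFlag reads back /etc/crio/crio.conf.d/02-crio.conf over minikube ssh, presumably to confirm that CRI-O was configured with the systemd cgroup manager. A sketch of that check using the exact command from the log (the asserted string is an assumption about what docker_test.go looks for):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Fetch the CRI-O drop-in over minikube ssh, exactly as the test does above.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "force-systemd-flag-359885",
		"ssh", "cat /etc/crio/crio.conf.d/02-crio.conf").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if strings.Contains(string(out), `cgroup_manager = "systemd"`) {
		fmt.Println("CRI-O is configured with the systemd cgroup manager")
	} else {
		fmt.Println("systemd cgroup manager not found in 02-crio.conf")
	}
}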

                                                
                                    
TestForceSystemdEnv (53.06s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-936726 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-936726 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (52.164262318s)
helpers_test.go:176: Cleaning up "force-systemd-env-936726" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-936726
--- PASS: TestForceSystemdEnv (53.06s)

                                                
                                    
TestErrorSpam/setup (35.73s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-339903 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-339903 --driver=kvm2  --container-runtime=crio
E1213 13:13:12.154424  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:13:12.163907  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:13:12.176032  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:13:12.197569  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:13:12.239034  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:13:12.320559  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:13:12.482117  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:13:12.803813  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:13:13.445898  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:13:14.727509  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:13:17.290450  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-339903 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-339903 --driver=kvm2  --container-runtime=crio: (35.730017025s)
--- PASS: TestErrorSpam/setup (35.73s)

                                                
                                    
TestErrorSpam/start (0.33s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-339903 --log_dir /tmp/nospam-339903 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-339903 --log_dir /tmp/nospam-339903 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-339903 --log_dir /tmp/nospam-339903 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

                                                
                                    
TestErrorSpam/status (0.65s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-339903 --log_dir /tmp/nospam-339903 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-339903 --log_dir /tmp/nospam-339903 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-339903 --log_dir /tmp/nospam-339903 status
--- PASS: TestErrorSpam/status (0.65s)

                                                
                                    
TestErrorSpam/pause (1.47s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-339903 --log_dir /tmp/nospam-339903 pause
E1213 13:13:22.411783  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-339903 --log_dir /tmp/nospam-339903 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-339903 --log_dir /tmp/nospam-339903 pause
--- PASS: TestErrorSpam/pause (1.47s)

TestErrorSpam/unpause (1.67s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-339903 --log_dir /tmp/nospam-339903 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-339903 --log_dir /tmp/nospam-339903 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-339903 --log_dir /tmp/nospam-339903 unpause
--- PASS: TestErrorSpam/unpause (1.67s)

TestErrorSpam/stop (5.31s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-339903 --log_dir /tmp/nospam-339903 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-339903 --log_dir /tmp/nospam-339903 stop: (1.873189764s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-339903 --log_dir /tmp/nospam-339903 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-339903 --log_dir /tmp/nospam-339903 stop: (1.996120624s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-339903 --log_dir /tmp/nospam-339903 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-339903 --log_dir /tmp/nospam-339903 stop: (1.437962206s)
--- PASS: TestErrorSpam/stop (5.31s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/test/nested/copy/135234/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (73.99s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-101171 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1213 13:13:32.653152  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:13:53.134962  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:14:34.097716  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-101171 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m13.990309764s)
--- PASS: TestFunctional/serial/StartWithProxy (73.99s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/KubeContext (0.05s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.93s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-101171 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-101171 cache add registry.k8s.io/pause:3.1: (1.022879029s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-101171 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-101171 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.93s)

TestFunctional/serial/CacheCmd/cache/add_local (2.19s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-101171 /tmp/TestFunctionalserialCacheCmdcacheadd_local936466381/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-101171 cache add minikube-local-cache-test:functional-101171
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-101171 cache add minikube-local-cache-test:functional-101171: (1.866671546s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-101171 cache delete minikube-local-cache-test:functional-101171
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-101171
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.19s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.17s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-101171 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.17s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.36s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-101171 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-101171 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-101171 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (166.021454ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-101171 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-101171 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.36s)

TestFunctional/serial/CacheCmd/cache/delete (0.13s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/delete_echo-server_images (0s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Non-zero exit: docker rmi -f kicbase/echo-server:1.0: context deadline exceeded (1.439µs)
functional_test.go:207: failed to remove image "kicbase/echo-server:1.0" from docker images. args "docker rmi -f kicbase/echo-server:1.0": context deadline exceeded
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-101171
functional_test.go:205: (dbg) Non-zero exit: docker rmi -f kicbase/echo-server:functional-101171: context deadline exceeded (485ns)
functional_test.go:207: failed to remove image "kicbase/echo-server:functional-101171" from docker images. args "docker rmi -f kicbase/echo-server:functional-101171": context deadline exceeded
--- PASS: TestFunctional/delete_echo-server_images (0.00s)

TestFunctional/delete_my-image_image (0s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-101171
functional_test.go:213: (dbg) Non-zero exit: docker rmi -f localhost/my-image:functional-101171: context deadline exceeded (819ns)
functional_test.go:215: failed to remove image my-image from docker images. args "docker rmi -f localhost/my-image:functional-101171": context deadline exceeded
--- PASS: TestFunctional/delete_my-image_image (0.00s)

TestFunctional/delete_minikube_cached_images (0s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-101171
functional_test.go:221: (dbg) Non-zero exit: docker rmi -f minikube-local-cache-test:functional-101171: context deadline exceeded (1.099µs)
functional_test.go:223: failed to remove image minikube local cache test images from docker. args "docker rmi -f minikube-local-cache-test:functional-101171": context deadline exceeded
--- PASS: TestFunctional/delete_minikube_cached_images (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22122-131207/.minikube/files/etc/test/nested/copy/135234/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (71.87s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-359736 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-359736 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (1m11.864609116s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (71.87s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (30.34s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1213 14:01:53.628798  135234 config.go:182] Loaded profile config "functional-359736": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-359736 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-359736 --alsologtostderr -v=8: (30.339070803s)
functional_test.go:678: soft start took 30.339527817s for "functional-359736" cluster.
I1213 14:02:23.968253  135234 config.go:182] Loaded profile config "functional-359736": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (30.34s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.04s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.07s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-359736 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.07s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.21s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-359736 cache add registry.k8s.io/pause:3.1: (1.006195872s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-359736 cache add registry.k8s.io/pause:3.3: (1.115191678s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-359736 cache add registry.k8s.io/pause:latest: (1.088836937s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.21s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (2.19s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-359736 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach1503627041/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 cache add minikube-local-cache-test:functional-359736
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-359736 cache add minikube-local-cache-test:functional-359736: (1.904495969s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 cache delete minikube-local-cache-test:functional-359736
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-359736
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (2.19s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.07s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.07s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.07s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.55s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-359736 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (179.325909ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.55s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.13s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.13s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.13s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 kubectl -- --context functional-359736 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.13s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.12s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-359736 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (33.61s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-359736 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1213 14:02:55.236530  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-359736 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.609420894s)
functional_test.go:776: restart took 33.609555518s for "functional-359736" cluster.
I1213 14:03:05.335846  135234 config.go:182] Loaded profile config "functional-359736": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (33.61s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-359736 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.2s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-359736 logs: (1.195928979s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.20s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.28s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs2072556460/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-359736 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs2072556460/001/logs.txt: (1.27551008s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.28s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.09s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-359736 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-359736
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-359736: exit status 115 (278.752693ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.132:31075 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-359736 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.09s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.42s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-359736 config get cpus: exit status 14 (60.914538ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-359736 config get cpus: exit status 14 (75.755053ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.42s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (39.1s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-359736 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-359736 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 151388: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (39.10s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.24s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-359736 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-359736 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (121.43945ms)
-- stdout --
	* [functional-359736] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-131207/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-131207/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1213 14:03:14.804915  151302 out.go:360] Setting OutFile to fd 1 ...
	I1213 14:03:14.805218  151302 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:03:14.805229  151302 out.go:374] Setting ErrFile to fd 2...
	I1213 14:03:14.805233  151302 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:03:14.805484  151302 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-131207/.minikube/bin
	I1213 14:03:14.805953  151302 out.go:368] Setting JSON to false
	I1213 14:03:14.806793  151302 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6335,"bootTime":1765628260,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 14:03:14.806847  151302 start.go:143] virtualization: kvm guest
	I1213 14:03:14.808517  151302 out.go:179] * [functional-359736] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 14:03:14.809727  151302 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 14:03:14.809743  151302 notify.go:221] Checking for updates...
	I1213 14:03:14.812043  151302 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 14:03:14.813394  151302 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-131207/kubeconfig
	I1213 14:03:14.814447  151302 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-131207/.minikube
	I1213 14:03:14.815496  151302 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 14:03:14.816579  151302 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 14:03:14.817936  151302 config.go:182] Loaded profile config "functional-359736": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 14:03:14.818484  151302 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 14:03:14.850566  151302 out.go:179] * Using the kvm2 driver based on existing profile
	I1213 14:03:14.851684  151302 start.go:309] selected driver: kvm2
	I1213 14:03:14.851704  151302 start.go:927] validating driver "kvm2" against &{Name:functional-359736 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-359736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 14:03:14.851804  151302 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 14:03:14.853667  151302 out.go:203] 
	W1213 14:03:14.854560  151302 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1213 14:03:14.855517  151302 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-359736 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.24s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.13s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-359736 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-359736 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (131.737902ms)
-- stdout --
	* [functional-359736] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-131207/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-131207/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1213 14:03:14.678444  151287 out.go:360] Setting OutFile to fd 1 ...
	I1213 14:03:14.678731  151287 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:03:14.678744  151287 out.go:374] Setting ErrFile to fd 2...
	I1213 14:03:14.678752  151287 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:03:14.679122  151287 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-131207/.minikube/bin
	I1213 14:03:14.679632  151287 out.go:368] Setting JSON to false
	I1213 14:03:14.680516  151287 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6335,"bootTime":1765628260,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 14:03:14.680581  151287 start.go:143] virtualization: kvm guest
	I1213 14:03:14.682737  151287 out.go:179] * [functional-359736] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1213 14:03:14.684143  151287 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 14:03:14.684145  151287 notify.go:221] Checking for updates...
	I1213 14:03:14.686583  151287 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 14:03:14.687935  151287 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-131207/kubeconfig
	I1213 14:03:14.689009  151287 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-131207/.minikube
	I1213 14:03:14.690096  151287 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 14:03:14.691237  151287 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 14:03:14.693069  151287 config.go:182] Loaded profile config "functional-359736": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 14:03:14.693797  151287 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 14:03:14.727114  151287 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1213 14:03:14.728218  151287 start.go:309] selected driver: kvm2
	I1213 14:03:14.728239  151287 start.go:927] validating driver "kvm2" against &{Name:functional-359736 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-359736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 14:03:14.728385  151287 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 14:03:14.730350  151287 out.go:203] 
	W1213 14:03:14.731393  151287 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1213 14:03:14.732408  151287 out.go:203] 
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.13s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.71s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.71s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (12.39s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-359736 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-359736 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-9f67c86d4-8l8rj" [b661980d-96b4-4b82-8ddf-51780043333b] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-9f67c86d4-8l8rj" [b661980d-96b4-4b82-8ddf-51780043333b] Running
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.004914305s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.132:30725
functional_test.go:1680: http://192.168.39.132:30725: success! body:
Request served by hello-node-connect-9f67c86d4-8l8rj
HTTP/1.1 GET /
Host: 192.168.39.132:30725
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (12.39s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.16s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.16s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (46.85s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [7b8a501f-8066-45d4-bb81-42c282bd1583] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.006196346s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-359736 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-359736 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-359736 get pvc myclaim -o=json
I1213 14:03:19.037256  135234 retry.go:31] will retry after 1.448917111s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:14209840-8dc9-42c9-84e2-d66a9b51060f ResourceVersion:795 Generation:0 CreationTimestamp:2025-12-13 14:03:18 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001a08810 VolumeMode:0xc001a08820 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-359736 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-359736 apply -f testdata/storage-provisioner/pod.yaml
I1213 14:03:20.697993  135234 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [97abff9e-53bb-4399-a54b-41200d0cf86e] Pending
helpers_test.go:353: "sp-pod" [97abff9e-53bb-4399-a54b-41200d0cf86e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [97abff9e-53bb-4399-a54b-41200d0cf86e] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 32.00348969s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-359736 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-359736 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-359736 apply -f testdata/storage-provisioner/pod.yaml
I1213 14:03:53.429601  135234 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [4a3d8a9e-208a-4cd7-92d3-bb903b54880c] Pending
2025/12/13 14:03:53 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:353: "sp-pod" [4a3d8a9e-208a-4cd7-92d3-bb903b54880c] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.005989358s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-359736 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (46.85s)
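For readers tracing the PVC flow above: the retry at functional_test_pvc_test.go:82 is just polling the claim until status.phase moves from Pending to Bound once the hostpath provisioner binds it. A minimal Go sketch of that polling step, assuming only that kubectl is on PATH; the context and claim names are taken from the log, everything else is illustrative rather than the test's actual helper:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForBound polls `kubectl get pvc` until the claim reports phase "Bound".
func waitForBound(kubectlContext, claim string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubectlContext,
			"get", "pvc", claim, "-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second) // the real test backs off via retry.go
	}
	return fmt.Errorf("pvc %q did not reach phase Bound within %v", claim, timeout)
}

func main() {
	if err := waitForBound("functional-359736", "myclaim", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}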

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.32s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.32s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 ssh -n functional-359736 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 cp functional-359736:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp738317346/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 ssh -n functional-359736 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 ssh -n functional-359736 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.07s)
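The CpCmd steps above reduce to a copy-and-read-back round trip: push a file into the guest with `minikube cp`, then confirm its contents over `minikube ssh`. A minimal sketch of that pattern using the same commands as the log; the expected-content value below is a placeholder, not the real testdata/cp-test.txt contents:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// copyAndVerify pushes src into the guest at dst and reads it back over ssh.
func copyAndVerify(minikube, profile, src, dst string, want []byte) error {
	if err := exec.Command(minikube, "-p", profile, "cp", src, dst).Run(); err != nil {
		return fmt.Errorf("cp: %w", err)
	}
	got, err := exec.Command(minikube, "-p", profile, "ssh", "-n", profile,
		"sudo cat "+dst).Output()
	if err != nil {
		return fmt.Errorf("ssh cat: %w", err)
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		return fmt.Errorf("round trip through %s lost content", dst)
	}
	return nil
}

func main() {
	err := copyAndVerify("out/minikube-linux-amd64", "functional-359736",
		"testdata/cp-test.txt", "/home/docker/cp-test.txt",
		[]byte("placeholder contents")) // placeholder, not the actual test fixture
	fmt.Println(err)
}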

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (30.84s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-359736 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-7d7b65bc95-4zhbn" [f6b4e28f-0e4c-4ea3-8ff4-efa183e73933] Pending
helpers_test.go:353: "mysql-7d7b65bc95-4zhbn" [f6b4e28f-0e4c-4ea3-8ff4-efa183e73933] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-7d7b65bc95-4zhbn" [f6b4e28f-0e4c-4ea3-8ff4-efa183e73933] Running
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: app=mysql healthy within 24.018259836s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-359736 exec mysql-7d7b65bc95-4zhbn -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-359736 exec mysql-7d7b65bc95-4zhbn -- mysql -ppassword -e "show databases;": exit status 1 (312.120163ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1213 14:03:36.915842  135234 retry.go:31] will retry after 869.217962ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-359736 exec mysql-7d7b65bc95-4zhbn -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-359736 exec mysql-7d7b65bc95-4zhbn -- mysql -ppassword -e "show databases;": exit status 1 (302.059531ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1213 14:03:38.088200  135234 retry.go:31] will retry after 2.156572034s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-359736 exec mysql-7d7b65bc95-4zhbn -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-359736 exec mysql-7d7b65bc95-4zhbn -- mysql -ppassword -e "show databases;": exit status 1 (414.466055ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1213 14:03:40.660280  135234 retry.go:31] will retry after 2.438771002s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-359736 exec mysql-7d7b65bc95-4zhbn -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (30.84s)
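The failed attempts above are expected: right after the mysql container reports Running, the server is still initializing, so the first queries fail with ERROR 1045 and then ERROR 2002 before the fourth succeeds. A minimal sketch of that retry-around-kubectl-exec pattern, assuming kubectl on PATH and using the pod name, password flag, and query exactly as they appear in the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// queryWithRetry re-runs `mysql -e "show databases;"` inside the pod until the
// server accepts connections or the deadline passes.
func queryWithRetry(kubectlContext, pod string, timeout time.Duration) ([]byte, error) {
	deadline := time.Now().Add(timeout)
	for {
		out, err := exec.Command("kubectl", "--context", kubectlContext, "exec", pod, "--",
			"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			return out, nil
		}
		if time.Now().After(deadline) {
			return out, fmt.Errorf("mysql never became ready: %w", err)
		}
		time.Sleep(2 * time.Second) // the real test uses an increasing backoff via retry.go
	}
}

func main() {
	out, err := queryWithRetry("functional-359736", "mysql-7d7b65bc95-4zhbn", 2*time.Minute)
	fmt.Println(string(out), err)
}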

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/135234/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 ssh "sudo cat /etc/test/nested/copy/135234/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.19s)
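Background on what FileSync is checking: minikube syncs files staged under the host's ~/.minikube/files directory into the guest on start, preserving their path, which is why the test can simply `sudo cat` /etc/test/nested/copy/135234/hosts inside the VM. A minimal sketch of the verification half only, assuming the file was staged before the cluster started (illustrative, not the test's helper):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// verifySyncedFile cats a path inside the guest and checks for the expected text.
func verifySyncedFile(minikube, profile, path, want string) error {
	out, err := exec.Command(minikube, "-p", profile, "ssh", "sudo cat "+path).Output()
	if err != nil {
		return fmt.Errorf("ssh cat %s: %w", path, err)
	}
	if !strings.Contains(string(out), want) {
		return fmt.Errorf("%s does not contain %q", path, want)
	}
	return nil
}

func main() {
	fmt.Println(verifySyncedFile("out/minikube-linux-amd64", "functional-359736",
		"/etc/test/nested/copy/135234/hosts", "Test file for checking file sync process"))
}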

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/135234.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 ssh "sudo cat /etc/ssl/certs/135234.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/135234.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 ssh "sudo cat /usr/share/ca-certificates/135234.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/1352342.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 ssh "sudo cat /etc/ssl/certs/1352342.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/1352342.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 ssh "sudo cat /usr/share/ca-certificates/1352342.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.07s)
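A note on the hashed filenames above: /etc/ssl/certs/51391683.0 and 3ec20f2e.0 are the OpenSSL-style names for the synced certificates, i.e. the certificate's subject hash plus a .0 suffix, which is how TLS tooling that scans /etc/ssl/certs locates a CA. A minimal sketch of recomputing that name for a PEM file, assuming the openssl CLI is available (illustrative; the test itself only cats the expected paths):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hashedCertName returns the "<subject-hash>.0" filename OpenSSL-based tools
// expect under /etc/ssl/certs for the given PEM certificate.
func hashedCertName(pemPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-noout", "-hash", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)) + ".0", nil
}

func main() {
	name, err := hashedCertName("/etc/ssl/certs/135234.pem") // path taken from the log above
	fmt.Println(name, err)
}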

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-359736 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.35s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-359736 ssh "sudo systemctl is-active docker": exit status 1 (175.498421ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 ssh "sudo systemctl is-active containerd"
E1213 14:03:12.154365  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-359736 ssh "sudo systemctl is-active containerd": exit status 1 (171.114892ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.35s)
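The non-zero exits above are the passing outcome for this test: with cri-o as the active runtime, `systemctl is-active docker` and `systemctl is-active containerd` print "inactive" and exit with status 3, which the ssh wrapper reports and minikube surfaces as exit status 1. A minimal sketch of treating "inactive" on stdout as success rather than an error, using the same `minikube ssh` invocation (illustrative only):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runtimeState reports what `systemctl is-active <unit>` printed inside the
// guest; a non-zero exit with "inactive" on stdout is the expected result here.
func runtimeState(minikube, profile, unit string) (string, error) {
	out, err := exec.Command(minikube, "-p", profile, "ssh",
		"sudo systemctl is-active "+unit).Output()
	state := strings.TrimSpace(string(out))
	if state == "inactive" {
		return state, nil // expected for the non-active runtimes
	}
	return state, err
}

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		state, err := runtimeState("out/minikube-linux-amd64", "functional-359736", unit)
		fmt.Println(unit, state, err)
	}
}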

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.39s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.39s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.45s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.45s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-359736 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
localhost/minikube-local-cache-test:functional-359736
localhost/kicbase/echo-server:functional-359736
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-359736 image ls --format short --alsologtostderr:
I1213 14:03:56.411846  152063 out.go:360] Setting OutFile to fd 1 ...
I1213 14:03:56.412162  152063 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 14:03:56.412176  152063 out.go:374] Setting ErrFile to fd 2...
I1213 14:03:56.412183  152063 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 14:03:56.412379  152063 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-131207/.minikube/bin
I1213 14:03:56.412952  152063 config.go:182] Loaded profile config "functional-359736": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 14:03:56.413087  152063 config.go:182] Loaded profile config "functional-359736": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 14:03:56.415475  152063 ssh_runner.go:195] Run: systemctl --version
I1213 14:03:56.417685  152063 main.go:143] libmachine: domain functional-359736 has defined MAC address 52:54:00:32:ef:8d in network mk-functional-359736
I1213 14:03:56.418182  152063 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:32:ef:8d", ip: ""} in network mk-functional-359736: {Iface:virbr1 ExpiryTime:2025-12-13 15:00:56 +0000 UTC Type:0 Mac:52:54:00:32:ef:8d Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:functional-359736 Clientid:01:52:54:00:32:ef:8d}
I1213 14:03:56.418207  152063 main.go:143] libmachine: domain functional-359736 has defined IP address 192.168.39.132 and MAC address 52:54:00:32:ef:8d in network mk-functional-359736
I1213 14:03:56.418375  152063 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/functional-359736/id_rsa Username:docker}
I1213 14:03:56.521194  152063 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.22s)
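As the stderr trace shows, `minikube image ls` ends up running `sudo crictl images --output json` inside the guest; the short listing above is essentially the repoTags pulled out of that JSON. A minimal sketch of that decoding step; the struct below assumes the usual crictl field names (images/repoTags) rather than quoting the minikube source:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList models only the fields the short format needs from crictl's JSON.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	// Run the same command the trace shows, via `minikube ssh`.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-359736",
		"ssh", "sudo crictl images --output json").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Println(err)
		return
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag)
		}
	}
}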

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-359736 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0     │ 8a4ded35a3eb1 │ 72MB   │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0     │ 7bb6219ddab95 │ 52.7MB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0     │ aa9d02839d8de │ 90.8MB │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0     │ 45f3cc72d235f │ 76.9MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ localhost/minikube-local-cache-test     │ functional-359736  │ 27593b03169e1 │ 3.33kB │
│ public.ecr.aws/docker/library/mysql     │ 8.4                │ 20d0be4ee4524 │ 804MB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-359736  │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ public.ecr.aws/nginx/nginx              │ alpine             │ a236f84b9d5d2 │ 55.2MB │
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ aa5e3ebc0dfed │ 79.2MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-359736 image ls --format table --alsologtostderr:
I1213 14:03:58.579946  152162 out.go:360] Setting OutFile to fd 1 ...
I1213 14:03:58.580231  152162 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 14:03:58.580241  152162 out.go:374] Setting ErrFile to fd 2...
I1213 14:03:58.580246  152162 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 14:03:58.580441  152162 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-131207/.minikube/bin
I1213 14:03:58.581091  152162 config.go:182] Loaded profile config "functional-359736": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 14:03:58.581201  152162 config.go:182] Loaded profile config "functional-359736": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 14:03:58.583602  152162 ssh_runner.go:195] Run: systemctl --version
I1213 14:03:58.586319  152162 main.go:143] libmachine: domain functional-359736 has defined MAC address 52:54:00:32:ef:8d in network mk-functional-359736
I1213 14:03:58.586645  152162 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:32:ef:8d", ip: ""} in network mk-functional-359736: {Iface:virbr1 ExpiryTime:2025-12-13 15:00:56 +0000 UTC Type:0 Mac:52:54:00:32:ef:8d Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:functional-359736 Clientid:01:52:54:00:32:ef:8d}
I1213 14:03:58.586665  152162 main.go:143] libmachine: domain functional-359736 has defined IP address 192.168.39.132 and MAC address 52:54:00:32:ef:8d in network mk-functional-359736
I1213 14:03:58.586780  152162 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/functional-359736/id_rsa Username:docker}
I1213 14:03:58.665510  152162 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.18s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-359736 image ls --format json --alsologtostderr:
[{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"r
epoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036","public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"803724943"},{"id":"aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58","registry.k8s.io
/kube-apiserver@sha256:c95487a138f982d925eb8c59c7fc40761c58af445463ac4df872aee36c5e999c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"90819569"},{"id":"45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d","registry.k8s.io/kube-controller-manager@sha256:ca8b699e445178c1fc4a8f31245d6bd7bd97192cc7b43baa2360522e09b55581"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"76872535"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui
/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"27593b03169e1e97a322c56a6aab688ec1f2b33ccf0bc9bb046e363ff0e8694e","repoDigests":["localhost/minikube-local-cache-test@sha256:6d9c63348cdf663dfc7fb4ca6c95e7b4c52b098f2b09f2043eb11de7137d36e0"],"repoTags":["localhost/minikube-local-cache-test:functional-359736"],"size":"3330"},{"id":"a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c","repoDigests":["public.ecr.aws/nginx/
nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff","public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55156597"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810","repoDigests":["registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a","registry.k8s.io/kube-proxy@sha256:70a55889ba3d6b048529c8edae375ce2f20d1204f3bbcacd24e617abe8888b82"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"71977881"},{"id":"7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ad
a2e46","repoDigests":["registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6","registry.k8s.io/kube-scheduler@sha256:bb3d10b07de89c1e36a78794573fdbb7939a465d235a5bd164bae43aec22ee5b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"52747095"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"
],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-359736"],"size":"4943877"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7","registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"79193994"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-359736 image ls --format json --alsologtostderr:
I1213 14:03:58.399533  152151 out.go:360] Setting OutFile to fd 1 ...
I1213 14:03:58.399620  152151 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 14:03:58.399625  152151 out.go:374] Setting ErrFile to fd 2...
I1213 14:03:58.399629  152151 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 14:03:58.399795  152151 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-131207/.minikube/bin
I1213 14:03:58.400374  152151 config.go:182] Loaded profile config "functional-359736": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 14:03:58.400466  152151 config.go:182] Loaded profile config "functional-359736": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 14:03:58.402388  152151 ssh_runner.go:195] Run: systemctl --version
I1213 14:03:58.404445  152151 main.go:143] libmachine: domain functional-359736 has defined MAC address 52:54:00:32:ef:8d in network mk-functional-359736
I1213 14:03:58.404843  152151 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:32:ef:8d", ip: ""} in network mk-functional-359736: {Iface:virbr1 ExpiryTime:2025-12-13 15:00:56 +0000 UTC Type:0 Mac:52:54:00:32:ef:8d Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:functional-359736 Clientid:01:52:54:00:32:ef:8d}
I1213 14:03:58.404871  152151 main.go:143] libmachine: domain functional-359736 has defined IP address 192.168.39.132 and MAC address 52:54:00:32:ef:8d in network mk-functional-359736
I1213 14:03:58.404998  152151 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/functional-359736/id_rsa Username:docker}
I1213 14:03:58.484052  152151 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.18s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-359736 image ls --format yaml --alsologtostderr:
- id: 8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
- registry.k8s.io/kube-proxy@sha256:70a55889ba3d6b048529c8edae375ce2f20d1204f3bbcacd24e617abe8888b82
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "71977881"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-359736
size: "4943877"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036
- public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "803724943"
- id: a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
- public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55156597"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79193994"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: 45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
- registry.k8s.io/kube-controller-manager@sha256:ca8b699e445178c1fc4a8f31245d6bd7bd97192cc7b43baa2360522e09b55581
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "76872535"
- id: 7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
- registry.k8s.io/kube-scheduler@sha256:bb3d10b07de89c1e36a78794573fdbb7939a465d235a5bd164bae43aec22ee5b
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "52747095"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 27593b03169e1e97a322c56a6aab688ec1f2b33ccf0bc9bb046e363ff0e8694e
repoDigests:
- localhost/minikube-local-cache-test@sha256:6d9c63348cdf663dfc7fb4ca6c95e7b4c52b098f2b09f2043eb11de7137d36e0
repoTags:
- localhost/minikube-local-cache-test:functional-359736
size: "3330"
- id: aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
- registry.k8s.io/kube-apiserver@sha256:c95487a138f982d925eb8c59c7fc40761c58af445463ac4df872aee36c5e999c
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "90819569"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-359736 image ls --format yaml --alsologtostderr:
I1213 14:03:56.636852  152086 out.go:360] Setting OutFile to fd 1 ...
I1213 14:03:56.636995  152086 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 14:03:56.637008  152086 out.go:374] Setting ErrFile to fd 2...
I1213 14:03:56.637015  152086 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 14:03:56.637413  152086 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-131207/.minikube/bin
I1213 14:03:56.638252  152086 config.go:182] Loaded profile config "functional-359736": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 14:03:56.638408  152086 config.go:182] Loaded profile config "functional-359736": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 14:03:56.641039  152086 ssh_runner.go:195] Run: systemctl --version
I1213 14:03:56.643584  152086 main.go:143] libmachine: domain functional-359736 has defined MAC address 52:54:00:32:ef:8d in network mk-functional-359736
I1213 14:03:56.644102  152086 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:32:ef:8d", ip: ""} in network mk-functional-359736: {Iface:virbr1 ExpiryTime:2025-12-13 15:00:56 +0000 UTC Type:0 Mac:52:54:00:32:ef:8d Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:functional-359736 Clientid:01:52:54:00:32:ef:8d}
I1213 14:03:56.644142  152086 main.go:143] libmachine: domain functional-359736 has defined IP address 192.168.39.132 and MAC address 52:54:00:32:ef:8d in network mk-functional-359736
I1213 14:03:56.644313  152086 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/functional-359736/id_rsa Username:docker}
I1213 14:03:56.736819  152086 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.20s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (4.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-359736 ssh pgrep buildkitd: exit status 1 (152.805131ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 image build -t localhost/my-image:functional-359736 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-359736 image build -t localhost/my-image:functional-359736 testdata/build --alsologtostderr: (3.702555715s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-359736 image build -t localhost/my-image:functional-359736 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> d239e57e537
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-359736
--> 88f0861829c
Successfully tagged localhost/my-image:functional-359736
88f0861829c2321b0d08142ff4826cc8d7391cb4f457dfc997da5c6637313a8f
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-359736 image build -t localhost/my-image:functional-359736 testdata/build --alsologtostderr:
I1213 14:03:56.986626  152107 out.go:360] Setting OutFile to fd 1 ...
I1213 14:03:56.986902  152107 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 14:03:56.986914  152107 out.go:374] Setting ErrFile to fd 2...
I1213 14:03:56.986918  152107 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 14:03:56.987171  152107 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-131207/.minikube/bin
I1213 14:03:56.987732  152107 config.go:182] Loaded profile config "functional-359736": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 14:03:56.988456  152107 config.go:182] Loaded profile config "functional-359736": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 14:03:56.990570  152107 ssh_runner.go:195] Run: systemctl --version
I1213 14:03:56.992442  152107 main.go:143] libmachine: domain functional-359736 has defined MAC address 52:54:00:32:ef:8d in network mk-functional-359736
I1213 14:03:56.992792  152107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:32:ef:8d", ip: ""} in network mk-functional-359736: {Iface:virbr1 ExpiryTime:2025-12-13 15:00:56 +0000 UTC Type:0 Mac:52:54:00:32:ef:8d Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:functional-359736 Clientid:01:52:54:00:32:ef:8d}
I1213 14:03:56.992824  152107 main.go:143] libmachine: domain functional-359736 has defined IP address 192.168.39.132 and MAC address 52:54:00:32:ef:8d in network mk-functional-359736
I1213 14:03:56.992971  152107 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/functional-359736/id_rsa Username:docker}
I1213 14:03:57.070901  152107 build_images.go:162] Building image from path: /tmp/build.3453371712.tar
I1213 14:03:57.070978  152107 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1213 14:03:57.085534  152107 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3453371712.tar
I1213 14:03:57.090062  152107 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3453371712.tar: stat -c "%s %y" /var/lib/minikube/build/build.3453371712.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3453371712.tar': No such file or directory
I1213 14:03:57.090099  152107 ssh_runner.go:362] scp /tmp/build.3453371712.tar --> /var/lib/minikube/build/build.3453371712.tar (3072 bytes)
I1213 14:03:57.120489  152107 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3453371712
I1213 14:03:57.132134  152107 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3453371712 -xf /var/lib/minikube/build/build.3453371712.tar
I1213 14:03:57.143191  152107 crio.go:315] Building image: /var/lib/minikube/build/build.3453371712
I1213 14:03:57.143250  152107 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-359736 /var/lib/minikube/build/build.3453371712 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1213 14:04:00.600128  152107 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-359736 /var/lib/minikube/build/build.3453371712 --cgroup-manager=cgroupfs: (3.45685189s)
I1213 14:04:00.600210  152107 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3453371712
I1213 14:04:00.613675  152107 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3453371712.tar
I1213 14:04:00.625390  152107 build_images.go:218] Built localhost/my-image:functional-359736 from /tmp/build.3453371712.tar
I1213 14:04:00.625439  152107 build_images.go:134] succeeded building to: functional-359736
I1213 14:04:00.625447  152107 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (4.04s)
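The build trace above shows the mechanics: the local testdata/build context is tarred, copied to /var/lib/minikube/build inside the guest, unpacked, and built there with `sudo podman build --cgroup-manager=cgroupfs`; the STEP lines reveal the context's Dockerfile (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /). A minimal sketch of driving the same build and then listing images, assuming only the minikube binary and an existing context directory (not the helper in build_images.go):

package main

import (
	"fmt"
	"os/exec"
)

// buildAndList builds an image inside the guest from a local context directory,
// then lists images so the new tag can be checked, mirroring the test above.
func buildAndList(minikube, profile, tag, contextDir string) error {
	build := exec.Command(minikube, "-p", profile, "image", "build", "-t", tag, contextDir)
	if out, err := build.CombinedOutput(); err != nil {
		return fmt.Errorf("image build failed: %w\n%s", err, out)
	}
	out, err := exec.Command(minikube, "-p", profile, "image", "ls").Output()
	if err != nil {
		return err
	}
	fmt.Printf("%s", out)
	return nil
}

func main() {
	err := buildAndList("out/minikube-linux-amd64", "functional-359736",
		"localhost/my-image:functional-359736", "testdata/build")
	fmt.Println(err)
}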

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (1.98s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.963143509s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-359736
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (1.98s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.33s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 image load --daemon kicbase/echo-server:functional-359736 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-359736 image load --daemon kicbase/echo-server:functional-359736 --alsologtostderr: (1.130529762s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.33s)
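ImageLoadDaemon pushes the tagged image from the host Docker daemon into the cluster's container runtime (CRI-O in this job) and lists images to confirm it arrived. A by-hand sketch of the same two steps:
	# Copy the image out of the host daemon into the cluster, then verify it is listed.
	out/minikube-linux-amd64 -p functional-359736 image load --daemon kicbase/echo-server:functional-359736 --alsologtostderr
	out/minikube-linux-amd64 -p functional-359736 image ls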

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.97s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 image load --daemon kicbase/echo-server:functional-359736 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.97s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (2.2s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-359736
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 image load --daemon kicbase/echo-server:functional-359736 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (2.20s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.83s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 image save kicbase/echo-server:functional-359736 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.83s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (8.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-linux-amd64 -p functional-359736 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (8.049264488s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (8.26s)
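Taken together, ImageSaveToFile and ImageLoadFromFile above round-trip the image through a tarball on the host. The equivalent commands, using the paths from this run:
	# Save the in-cluster image to a tarball, load it back, and list images to confirm.
	out/minikube-linux-amd64 -p functional-359736 image save kicbase/echo-server:functional-359736 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
	out/minikube-linux-amd64 -p functional-359736 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
	out/minikube-linux-amd64 -p functional-359736 image ls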

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.57s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-359736
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 image save --daemon kicbase/echo-server:functional-359736 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-359736
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.57s)
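ImageSaveDaemon goes the other direction: it removes the host-side tag, exports the in-cluster image back into the host Docker daemon, and inspects the result. The inspected name carries a localhost/ prefix, which appears to be how the unqualified tag comes back from the cluster runtime in this job:
	# Export the cluster's copy of the image back to the host daemon and inspect it.
	docker rmi kicbase/echo-server:functional-359736
	out/minikube-linux-amd64 -p functional-359736 image save --daemon kicbase/echo-server:functional-359736 --alsologtostderr
	docker image inspect localhost/kicbase/echo-server:functional-359736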

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (24.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-359736 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-359736 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-5758569b79-fwc8h" [5cff8a73-c7c5-4f84-b76e-94ef452308cb] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-5758569b79-fwc8h" [5cff8a73-c7c5-4f84-b76e-94ef452308cb] Running
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 24.004907487s
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (24.19s)
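ServiceCmd/DeployApp creates the hello-node deployment and exposes it as a NodePort service; the later ServiceCmd subtests only resolve that service's URL. The same setup by hand (the last command is not part of the test, just a quick way to watch the pod go Ready):
	# Deploy the echo server and expose it for the ServiceCmd subtests below.
	kubectl --context functional-359736 create deployment hello-node --image kicbase/echo-server
	kubectl --context functional-359736 expose deployment hello-node --type=NodePort --port=8080
	kubectl --context functional-359736 get pods -l app=hello-node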

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.31s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.31s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.3s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "233.443575ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "65.969782ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.30s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.29s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "228.415574ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "62.861927ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.29s)
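The ProfileCmd subtests time the same listing in its table, plain, JSON, and light variants; the ~65 ms runs versus ~230 ms runs above are consistent with the light forms skipping the per-profile cluster status check. The four invocations as run here:
	# Full vs. light profile listings (timings above: roughly 230 ms vs. 65 ms).
	out/minikube-linux-amd64 profile list
	out/minikube-linux-amd64 profile list -l
	out/minikube-linux-amd64 profile list -o json
	out/minikube-linux-amd64 profile list -o json --light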

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (8.76s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-359736 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo396747127/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765634634982745059" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo396747127/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765634634982745059" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo396747127/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765634634982745059" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo396747127/001/test-1765634634982745059
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-359736 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (165.353935ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1213 14:03:55.148431  135234 retry.go:31] will retry after 314.571487ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 13 14:03 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 13 14:03 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 13 14:03 test-1765634634982745059
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 ssh cat /mount-9p/test-1765634634982745059
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-359736 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [7d6cca53-800f-43aa-9ec9-7c0d4edaf347] Pending
helpers_test.go:353: "busybox-mount" [7d6cca53-800f-43aa-9ec9-7c0d4edaf347] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [7d6cca53-800f-43aa-9ec9-7c0d4edaf347] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [7d6cca53-800f-43aa-9ec9-7c0d4edaf347] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.005156791s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-359736 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-359736 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo396747127/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (8.76s)
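MountCmd/any-port keeps a 9p mount of a host temp directory running in the background, verifies it from inside the guest, has a busybox pod read and write through it, then unmounts. A condensed by-hand version (the /tmp path is this run's temp dir; the mount command stays in the foreground, so run it in its own terminal):
	# Terminal 1: host directory mounted into the guest at /mount-9p over 9p.
	out/minikube-linux-amd64 mount -p functional-359736 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo396747127/001:/mount-9p --alsologtostderr -v=1
	# Terminal 2: confirm the mount, look at its contents, then force-unmount.
	out/minikube-linux-amd64 -p functional-359736 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-359736 ssh -- ls -la /mount-9p
	out/minikube-linux-amd64 -p functional-359736 ssh "sudo umount -f /mount-9p"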

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (1.2s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-359736 service list: (1.204084045s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (1.20s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (1.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-359736 service list -o json: (1.246634307s)
functional_test.go:1504: Took "1.246737995s" to run "out/minikube-linux-amd64 -p functional-359736 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (1.25s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.132:30830
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.24s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 service hello-node --url --format={{.IP}}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.23s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.132:30830
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.23s)
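The HTTPS, Format, and URL subtests all resolve the hello-node NodePort through the service command; this run resolved it to 192.168.39.132:30830. The lookups as run here:
	# Different ways of resolving the hello-node endpoint (192.168.39.132:30830 in this run).
	out/minikube-linux-amd64 -p functional-359736 service list -o json
	out/minikube-linux-amd64 -p functional-359736 service --namespace=default --https --url hello-node
	out/minikube-linux-amd64 -p functional-359736 service hello-node --url --format={{.IP}}
	out/minikube-linux-amd64 -p functional-359736 service hello-node --url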

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.4s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-359736 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2763885901/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-359736 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (151.973255ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1213 14:04:03.894780  135234 retry.go:31] will retry after 585.613208ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-359736 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2763885901/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-359736 ssh "sudo umount -f /mount-9p": exit status 1 (151.900161ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-359736 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-359736 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2763885901/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.40s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.03s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-359736 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1196278764/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-359736 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1196278764/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-359736 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1196278764/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-359736 ssh "findmnt -T" /mount1: exit status 1 (164.217521ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1213 14:04:05.310956  135234 retry.go:31] will retry after 345.270573ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-359736 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-359736 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-359736 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1196278764/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-359736 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1196278764/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-359736 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1196278764/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.03s)
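VerifyCleanup starts three concurrent mounts of the same host directory and then checks that a single kill tears all of them down. A sketch of the flow (<mount-dir> stands in for the per-run temp directory; backgrounding with & is a stand-in for the separate child processes the test drives):
	# Three mounts of one host dir, then kill every mount process for the profile.
	out/minikube-linux-amd64 mount -p functional-359736 /tmp/<mount-dir>:/mount1 --alsologtostderr -v=1 &
	out/minikube-linux-amd64 mount -p functional-359736 /tmp/<mount-dir>:/mount2 --alsologtostderr -v=1 &
	out/minikube-linux-amd64 mount -p functional-359736 /tmp/<mount-dir>:/mount3 --alsologtostderr -v=1 &
	out/minikube-linux-amd64 -p functional-359736 ssh "findmnt -T" /mount1
	out/minikube-linux-amd64 mount -p functional-359736 --kill=true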

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-359736
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-359736
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-359736
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (203.24s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-959286 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (3m22.707953909s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (203.24s)
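StartCluster brings up the HA topology (three control-plane nodes) with the KVM driver and CRI-O, which took about 3m23s in this run, then checks status. The exact invocation:
	# HA cluster start as exercised by the test, followed by a status check.
	out/minikube-linux-amd64 -p ha-959286 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p ha-959286 status --alsologtostderr -v 5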

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.19s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-959286 kubectl -- rollout status deployment/busybox: (4.914587958s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 kubectl -- exec busybox-7b57f96db7-29t4k -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 kubectl -- exec busybox-7b57f96db7-576j9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 kubectl -- exec busybox-7b57f96db7-j82fn -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 kubectl -- exec busybox-7b57f96db7-29t4k -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 kubectl -- exec busybox-7b57f96db7-576j9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 kubectl -- exec busybox-7b57f96db7-j82fn -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 kubectl -- exec busybox-7b57f96db7-29t4k -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 kubectl -- exec busybox-7b57f96db7-576j9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 kubectl -- exec busybox-7b57f96db7-j82fn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.19s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.27s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 kubectl -- exec busybox-7b57f96db7-29t4k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 kubectl -- exec busybox-7b57f96db7-29t4k -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 kubectl -- exec busybox-7b57f96db7-576j9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 kubectl -- exec busybox-7b57f96db7-576j9 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 kubectl -- exec busybox-7b57f96db7-j82fn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 kubectl -- exec busybox-7b57f96db7-j82fn -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.27s)
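PingHostFromPods resolves host.minikube.internal inside each busybox pod and pings the address it gets back (192.168.39.1 on this KVM network); the awk/cut pipeline just pulls the IP out of nslookup's output. For one pod (name from this run):
	# Resolve the host's address from inside a pod, then ping it.
	out/minikube-linux-amd64 -p ha-959286 kubectl -- exec busybox-7b57f96db7-29t4k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	out/minikube-linux-amd64 -p ha-959286 kubectl -- exec busybox-7b57f96db7-29t4k -- sh -c "ping -c 1 192.168.39.1"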

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (70.81s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 node add --alsologtostderr -v 5
E1213 14:08:12.154360  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:08:12.586269  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-359736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:08:12.592709  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-359736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:08:12.604133  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-359736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:08:12.625629  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-359736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:08:12.667121  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-359736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:08:12.748613  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-359736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:08:12.910161  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-359736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:08:13.231927  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-359736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:08:13.874097  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-359736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:08:15.156105  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-359736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:08:17.718204  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-359736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:08:22.840471  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-359736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:08:33.081888  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-359736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-959286 node add --alsologtostderr -v 5: (1m10.19165057s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (70.81s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-959286 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.65s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.65s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (10.43s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 status --output json --alsologtostderr -v 5
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 cp testdata/cp-test.txt ha-959286:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 ssh -n ha-959286 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 cp ha-959286:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile267126526/001/cp-test_ha-959286.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 ssh -n ha-959286 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 cp ha-959286:/home/docker/cp-test.txt ha-959286-m02:/home/docker/cp-test_ha-959286_ha-959286-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 ssh -n ha-959286 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 ssh -n ha-959286-m02 "sudo cat /home/docker/cp-test_ha-959286_ha-959286-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 cp ha-959286:/home/docker/cp-test.txt ha-959286-m03:/home/docker/cp-test_ha-959286_ha-959286-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 ssh -n ha-959286 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 ssh -n ha-959286-m03 "sudo cat /home/docker/cp-test_ha-959286_ha-959286-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 cp ha-959286:/home/docker/cp-test.txt ha-959286-m04:/home/docker/cp-test_ha-959286_ha-959286-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 ssh -n ha-959286 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 ssh -n ha-959286-m04 "sudo cat /home/docker/cp-test_ha-959286_ha-959286-m04.txt"
E1213 14:08:53.564217  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-359736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 cp testdata/cp-test.txt ha-959286-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 ssh -n ha-959286-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 cp ha-959286-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile267126526/001/cp-test_ha-959286-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 ssh -n ha-959286-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 cp ha-959286-m02:/home/docker/cp-test.txt ha-959286:/home/docker/cp-test_ha-959286-m02_ha-959286.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 ssh -n ha-959286-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 ssh -n ha-959286 "sudo cat /home/docker/cp-test_ha-959286-m02_ha-959286.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 cp ha-959286-m02:/home/docker/cp-test.txt ha-959286-m03:/home/docker/cp-test_ha-959286-m02_ha-959286-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 ssh -n ha-959286-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 ssh -n ha-959286-m03 "sudo cat /home/docker/cp-test_ha-959286-m02_ha-959286-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 cp ha-959286-m02:/home/docker/cp-test.txt ha-959286-m04:/home/docker/cp-test_ha-959286-m02_ha-959286-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 ssh -n ha-959286-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 ssh -n ha-959286-m04 "sudo cat /home/docker/cp-test_ha-959286-m02_ha-959286-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 cp testdata/cp-test.txt ha-959286-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 ssh -n ha-959286-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 cp ha-959286-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile267126526/001/cp-test_ha-959286-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 ssh -n ha-959286-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 cp ha-959286-m03:/home/docker/cp-test.txt ha-959286:/home/docker/cp-test_ha-959286-m03_ha-959286.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 ssh -n ha-959286-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 ssh -n ha-959286 "sudo cat /home/docker/cp-test_ha-959286-m03_ha-959286.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 cp ha-959286-m03:/home/docker/cp-test.txt ha-959286-m02:/home/docker/cp-test_ha-959286-m03_ha-959286-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 ssh -n ha-959286-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 ssh -n ha-959286-m02 "sudo cat /home/docker/cp-test_ha-959286-m03_ha-959286-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 cp ha-959286-m03:/home/docker/cp-test.txt ha-959286-m04:/home/docker/cp-test_ha-959286-m03_ha-959286-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 ssh -n ha-959286-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 ssh -n ha-959286-m04 "sudo cat /home/docker/cp-test_ha-959286-m03_ha-959286-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 cp testdata/cp-test.txt ha-959286-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 ssh -n ha-959286-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 cp ha-959286-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile267126526/001/cp-test_ha-959286-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 ssh -n ha-959286-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 cp ha-959286-m04:/home/docker/cp-test.txt ha-959286:/home/docker/cp-test_ha-959286-m04_ha-959286.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 ssh -n ha-959286-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 ssh -n ha-959286 "sudo cat /home/docker/cp-test_ha-959286-m04_ha-959286.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 cp ha-959286-m04:/home/docker/cp-test.txt ha-959286-m02:/home/docker/cp-test_ha-959286-m04_ha-959286-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 ssh -n ha-959286-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 ssh -n ha-959286-m02 "sudo cat /home/docker/cp-test_ha-959286-m04_ha-959286-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 cp ha-959286-m04:/home/docker/cp-test.txt ha-959286-m03:/home/docker/cp-test_ha-959286-m04_ha-959286-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 ssh -n ha-959286-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 ssh -n ha-959286-m03 "sudo cat /home/docker/cp-test_ha-959286-m04_ha-959286-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (10.43s)
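CopyFile runs the full host-to-node and node-to-node cp matrix, checking every hop with an ssh cat on the destination. One hop of that matrix, with node names from this run:
	# Host -> primary node, then primary -> m02, verified on the receiving node.
	out/minikube-linux-amd64 -p ha-959286 cp testdata/cp-test.txt ha-959286:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p ha-959286 cp ha-959286:/home/docker/cp-test.txt ha-959286-m02:/home/docker/cp-test_ha-959286_ha-959286-m02.txt
	out/minikube-linux-amd64 -p ha-959286 ssh -n ha-959286-m02 "sudo cat /home/docker/cp-test_ha-959286_ha-959286-m02.txt"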

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (84.84s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 node stop m02 --alsologtostderr -v 5
E1213 14:09:34.527040  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-359736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-959286 node stop m02 --alsologtostderr -v 5: (1m24.373210836s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-959286 status --alsologtostderr -v 5: exit status 7 (468.000427ms)

                                                
                                                
-- stdout --
	ha-959286
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-959286-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-959286-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-959286-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 14:10:25.375932  155456 out.go:360] Setting OutFile to fd 1 ...
	I1213 14:10:25.376062  155456 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:10:25.376087  155456 out.go:374] Setting ErrFile to fd 2...
	I1213 14:10:25.376098  155456 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:10:25.376297  155456 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-131207/.minikube/bin
	I1213 14:10:25.376466  155456 out.go:368] Setting JSON to false
	I1213 14:10:25.376490  155456 mustload.go:66] Loading cluster: ha-959286
	I1213 14:10:25.376620  155456 notify.go:221] Checking for updates...
	I1213 14:10:25.376844  155456 config.go:182] Loaded profile config "ha-959286": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 14:10:25.376858  155456 status.go:174] checking status of ha-959286 ...
	I1213 14:10:25.379098  155456 status.go:371] ha-959286 host status = "Running" (err=<nil>)
	I1213 14:10:25.379125  155456 host.go:66] Checking if "ha-959286" exists ...
	I1213 14:10:25.381430  155456 main.go:143] libmachine: domain ha-959286 has defined MAC address 52:54:00:90:1e:a0 in network mk-ha-959286
	I1213 14:10:25.381914  155456 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:90:1e:a0", ip: ""} in network mk-ha-959286: {Iface:virbr1 ExpiryTime:2025-12-13 15:04:21 +0000 UTC Type:0 Mac:52:54:00:90:1e:a0 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-959286 Clientid:01:52:54:00:90:1e:a0}
	I1213 14:10:25.381942  155456 main.go:143] libmachine: domain ha-959286 has defined IP address 192.168.39.146 and MAC address 52:54:00:90:1e:a0 in network mk-ha-959286
	I1213 14:10:25.382056  155456 host.go:66] Checking if "ha-959286" exists ...
	I1213 14:10:25.382267  155456 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 14:10:25.384603  155456 main.go:143] libmachine: domain ha-959286 has defined MAC address 52:54:00:90:1e:a0 in network mk-ha-959286
	I1213 14:10:25.384985  155456 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:90:1e:a0", ip: ""} in network mk-ha-959286: {Iface:virbr1 ExpiryTime:2025-12-13 15:04:21 +0000 UTC Type:0 Mac:52:54:00:90:1e:a0 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-959286 Clientid:01:52:54:00:90:1e:a0}
	I1213 14:10:25.385017  155456 main.go:143] libmachine: domain ha-959286 has defined IP address 192.168.39.146 and MAC address 52:54:00:90:1e:a0 in network mk-ha-959286
	I1213 14:10:25.385181  155456 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/ha-959286/id_rsa Username:docker}
	I1213 14:10:25.466184  155456 ssh_runner.go:195] Run: systemctl --version
	I1213 14:10:25.472525  155456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 14:10:25.488371  155456 kubeconfig.go:125] found "ha-959286" server: "https://192.168.39.254:8443"
	I1213 14:10:25.488407  155456 api_server.go:166] Checking apiserver status ...
	I1213 14:10:25.488450  155456 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:10:25.506097  155456 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1388/cgroup
	W1213 14:10:25.517233  155456 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1388/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1213 14:10:25.517285  155456 ssh_runner.go:195] Run: ls
	I1213 14:10:25.521697  155456 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1213 14:10:25.526314  155456 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1213 14:10:25.526333  155456 status.go:463] ha-959286 apiserver status = Running (err=<nil>)
	I1213 14:10:25.526353  155456 status.go:176] ha-959286 status: &{Name:ha-959286 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 14:10:25.526370  155456 status.go:174] checking status of ha-959286-m02 ...
	I1213 14:10:25.527791  155456 status.go:371] ha-959286-m02 host status = "Stopped" (err=<nil>)
	I1213 14:10:25.527810  155456 status.go:384] host is not running, skipping remaining checks
	I1213 14:10:25.527815  155456 status.go:176] ha-959286-m02 status: &{Name:ha-959286-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 14:10:25.527826  155456 status.go:174] checking status of ha-959286-m03 ...
	I1213 14:10:25.529090  155456 status.go:371] ha-959286-m03 host status = "Running" (err=<nil>)
	I1213 14:10:25.529105  155456 host.go:66] Checking if "ha-959286-m03" exists ...
	I1213 14:10:25.531563  155456 main.go:143] libmachine: domain ha-959286-m03 has defined MAC address 52:54:00:7b:ee:a8 in network mk-ha-959286
	I1213 14:10:25.531955  155456 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:ee:a8", ip: ""} in network mk-ha-959286: {Iface:virbr1 ExpiryTime:2025-12-13 15:06:15 +0000 UTC Type:0 Mac:52:54:00:7b:ee:a8 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-959286-m03 Clientid:01:52:54:00:7b:ee:a8}
	I1213 14:10:25.531978  155456 main.go:143] libmachine: domain ha-959286-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:7b:ee:a8 in network mk-ha-959286
	I1213 14:10:25.532138  155456 host.go:66] Checking if "ha-959286-m03" exists ...
	I1213 14:10:25.532368  155456 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 14:10:25.534305  155456 main.go:143] libmachine: domain ha-959286-m03 has defined MAC address 52:54:00:7b:ee:a8 in network mk-ha-959286
	I1213 14:10:25.534644  155456 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:ee:a8", ip: ""} in network mk-ha-959286: {Iface:virbr1 ExpiryTime:2025-12-13 15:06:15 +0000 UTC Type:0 Mac:52:54:00:7b:ee:a8 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-959286-m03 Clientid:01:52:54:00:7b:ee:a8}
	I1213 14:10:25.534661  155456 main.go:143] libmachine: domain ha-959286-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:7b:ee:a8 in network mk-ha-959286
	I1213 14:10:25.534769  155456 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/ha-959286-m03/id_rsa Username:docker}
	I1213 14:10:25.613461  155456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 14:10:25.630243  155456 kubeconfig.go:125] found "ha-959286" server: "https://192.168.39.254:8443"
	I1213 14:10:25.630270  155456 api_server.go:166] Checking apiserver status ...
	I1213 14:10:25.630302  155456 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:10:25.650633  155456 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1766/cgroup
	W1213 14:10:25.661210  155456 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1766/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1213 14:10:25.661269  155456 ssh_runner.go:195] Run: ls
	I1213 14:10:25.666595  155456 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1213 14:10:25.671038  155456 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1213 14:10:25.671085  155456 status.go:463] ha-959286-m03 apiserver status = Running (err=<nil>)
	I1213 14:10:25.671098  155456 status.go:176] ha-959286-m03 status: &{Name:ha-959286-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 14:10:25.671127  155456 status.go:174] checking status of ha-959286-m04 ...
	I1213 14:10:25.672902  155456 status.go:371] ha-959286-m04 host status = "Running" (err=<nil>)
	I1213 14:10:25.672928  155456 host.go:66] Checking if "ha-959286-m04" exists ...
	I1213 14:10:25.675443  155456 main.go:143] libmachine: domain ha-959286-m04 has defined MAC address 52:54:00:13:56:0b in network mk-ha-959286
	I1213 14:10:25.675794  155456 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:13:56:0b", ip: ""} in network mk-ha-959286: {Iface:virbr1 ExpiryTime:2025-12-13 15:07:54 +0000 UTC Type:0 Mac:52:54:00:13:56:0b Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-959286-m04 Clientid:01:52:54:00:13:56:0b}
	I1213 14:10:25.675841  155456 main.go:143] libmachine: domain ha-959286-m04 has defined IP address 192.168.39.200 and MAC address 52:54:00:13:56:0b in network mk-ha-959286
	I1213 14:10:25.675980  155456 host.go:66] Checking if "ha-959286-m04" exists ...
	I1213 14:10:25.676177  155456 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 14:10:25.678708  155456 main.go:143] libmachine: domain ha-959286-m04 has defined MAC address 52:54:00:13:56:0b in network mk-ha-959286
	I1213 14:10:25.679249  155456 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:13:56:0b", ip: ""} in network mk-ha-959286: {Iface:virbr1 ExpiryTime:2025-12-13 15:07:54 +0000 UTC Type:0 Mac:52:54:00:13:56:0b Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-959286-m04 Clientid:01:52:54:00:13:56:0b}
	I1213 14:10:25.679286  155456 main.go:143] libmachine: domain ha-959286-m04 has defined IP address 192.168.39.200 and MAC address 52:54:00:13:56:0b in network mk-ha-959286
	I1213 14:10:25.679508  155456 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/ha-959286-m04/id_rsa Username:docker}
	I1213 14:10:25.762705  155456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 14:10:25.781688  155456 status.go:176] ha-959286-m04 status: &{Name:ha-959286-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (84.84s)
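StopSecondaryNode stops the m02 control plane and expects status to come back non-zero (exit status 7 in this run), with m02 reported as Stopped while the other nodes stay Running, as the stdout above shows. By hand:
	# Stop the second control plane; status is expected to exit non-zero afterwards.
	out/minikube-linux-amd64 -p ha-959286 node stop m02 --alsologtostderr -v 5
	out/minikube-linux-amd64 -p ha-959286 status --alsologtostderr -v 5 || echo "status exited $?"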

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.48s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.48s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (35.54s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 node start m02 --alsologtostderr -v 5
E1213 14:10:56.449504  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-359736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-959286 node start m02 --alsologtostderr -v 5: (34.600547175s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (35.54s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.82s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (348.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 stop --alsologtostderr -v 5
E1213 14:13:12.160460  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:13:12.585360  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-359736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:13:40.291393  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-359736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-959286 stop --alsologtostderr -v 5: (4m2.47666775s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 start --wait true --alsologtostderr -v 5
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-959286 start --wait true --alsologtostderr -v 5: (1m46.108935917s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (348.74s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (18.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-959286 node delete m03 --alsologtostderr -v 5: (17.577750478s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.20s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.51s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (258.98s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 stop --alsologtostderr -v 5
E1213 14:18:12.154280  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:18:12.586346  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-359736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:19:35.240258  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-959286 stop --alsologtostderr -v 5: (4m18.911944794s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-959286 status --alsologtostderr -v 5: exit status 7 (71.545845ms)

                                                
                                                
-- stdout --
	ha-959286
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-959286-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-959286-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 14:21:29.053042  159070 out.go:360] Setting OutFile to fd 1 ...
	I1213 14:21:29.053377  159070 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:21:29.053390  159070 out.go:374] Setting ErrFile to fd 2...
	I1213 14:21:29.053398  159070 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:21:29.053599  159070 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-131207/.minikube/bin
	I1213 14:21:29.053813  159070 out.go:368] Setting JSON to false
	I1213 14:21:29.053855  159070 mustload.go:66] Loading cluster: ha-959286
	I1213 14:21:29.053978  159070 notify.go:221] Checking for updates...
	I1213 14:21:29.054367  159070 config.go:182] Loaded profile config "ha-959286": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 14:21:29.054389  159070 status.go:174] checking status of ha-959286 ...
	I1213 14:21:29.056572  159070 status.go:371] ha-959286 host status = "Stopped" (err=<nil>)
	I1213 14:21:29.056590  159070 status.go:384] host is not running, skipping remaining checks
	I1213 14:21:29.056597  159070 status.go:176] ha-959286 status: &{Name:ha-959286 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 14:21:29.056618  159070 status.go:174] checking status of ha-959286-m02 ...
	I1213 14:21:29.057841  159070 status.go:371] ha-959286-m02 host status = "Stopped" (err=<nil>)
	I1213 14:21:29.057857  159070 status.go:384] host is not running, skipping remaining checks
	I1213 14:21:29.057863  159070 status.go:176] ha-959286-m02 status: &{Name:ha-959286-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 14:21:29.057879  159070 status.go:174] checking status of ha-959286-m04 ...
	I1213 14:21:29.059157  159070 status.go:371] ha-959286-m04 host status = "Stopped" (err=<nil>)
	I1213 14:21:29.059178  159070 status.go:384] host is not running, skipping remaining checks
	I1213 14:21:29.059185  159070 status.go:176] ha-959286-m04 status: &{Name:ha-959286-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (258.98s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (98.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-959286 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m37.461758698s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (98.08s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.49s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (65.21s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 node add --control-plane --alsologtostderr -v 5
E1213 14:23:12.154029  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:23:12.585547  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-359736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-959286 node add --control-plane --alsologtostderr -v 5: (1m4.583888037s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-959286 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (65.21s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.64s)

                                                
                                    
TestJSONOutput/start/Command (75.41s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-691489 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
E1213 14:24:35.653712  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-359736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-691489 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m15.413668531s)
--- PASS: TestJSONOutput/start/Command (75.41s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.68s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-691489 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.68s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.63s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-691489 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.23s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-691489 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-691489 --output=json --user=testUser: (7.229654191s)
--- PASS: TestJSONOutput/stop/Command (7.23s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-055503 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-055503 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (72.1641ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"fb06220d-28c2-477a-ba03-b9239bccd668","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-055503] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b9b1955c-3e4b-4202-91a3-5ee59ea63ded","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22122"}}
	{"specversion":"1.0","id":"29f3e59a-7587-40d3-9cb7-34900a15936c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2924eca3-5150-4237-82f2-cf1860d8e9ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22122-131207/kubeconfig"}}
	{"specversion":"1.0","id":"acb1a749-eff2-4b70-a917-6f0185301965","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-131207/.minikube"}}
	{"specversion":"1.0","id":"d2bebda4-fd15-4d4e-81ae-7f638c9dced0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"431b857f-2d32-400f-abe5-0687fd4a8a4d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"39f63549-15ec-4276-98e1-5c689c2a3af1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-055503" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-055503
--- PASS: TestErrorJSONOutput (0.22s)

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (73.37s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-683894 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-683894 --driver=kvm2  --container-runtime=crio: (33.922165647s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-686159 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-686159 --driver=kvm2  --container-runtime=crio: (36.901641923s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-683894
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-686159
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "second-686159" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-686159
helpers_test.go:176: Cleaning up "first-683894" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-683894
--- PASS: TestMinikubeProfile (73.37s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (19.16s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-665112 --memory=3072 --mount-string /tmp/TestMountStartserial3676492369/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-665112 --memory=3072 --mount-string /tmp/TestMountStartserial3676492369/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (18.159408711s)
--- PASS: TestMountStart/serial/StartWithMountFirst (19.16s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-665112 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-665112 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.30s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (19.03s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-684122 --memory=3072 --mount-string /tmp/TestMountStartserial3676492369/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-684122 --memory=3072 --mount-string /tmp/TestMountStartserial3676492369/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (18.028636844s)
--- PASS: TestMountStart/serial/StartWithMountSecond (19.03s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-684122 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-684122 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.30s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.67s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-665112 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.67s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-684122 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-684122 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.30s)

                                                
                                    
TestMountStart/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-684122
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-684122: (1.211643564s)
--- PASS: TestMountStart/serial/Stop (1.21s)

                                                
                                    
TestMountStart/serial/RestartStopped (18.29s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-684122
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-684122: (17.287187689s)
--- PASS: TestMountStart/serial/RestartStopped (18.29s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-684122 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-684122 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.29s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (94.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-911357 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1213 14:28:12.153870  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:28:12.585757  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-359736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-911357 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m34.025892365s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911357 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (94.36s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-911357 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-911357 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-911357 -- rollout status deployment/busybox: (4.643718189s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-911357 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-911357 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-911357 -- exec busybox-7b57f96db7-n727d -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-911357 -- exec busybox-7b57f96db7-z6zgz -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-911357 -- exec busybox-7b57f96db7-n727d -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-911357 -- exec busybox-7b57f96db7-z6zgz -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-911357 -- exec busybox-7b57f96db7-n727d -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-911357 -- exec busybox-7b57f96db7-z6zgz -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.22s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-911357 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-911357 -- exec busybox-7b57f96db7-n727d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-911357 -- exec busybox-7b57f96db7-n727d -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-911357 -- exec busybox-7b57f96db7-z6zgz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-911357 -- exec busybox-7b57f96db7-z6zgz -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.83s)

                                                
                                    
TestMultiNode/serial/AddNode (41.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-911357 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-911357 -v=5 --alsologtostderr: (41.175766335s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911357 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (41.60s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-911357 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.44s)

                                                
                                    
TestMultiNode/serial/CopyFile (5.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911357 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911357 cp testdata/cp-test.txt multinode-911357:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911357 ssh -n multinode-911357 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911357 cp multinode-911357:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2869450978/001/cp-test_multinode-911357.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911357 ssh -n multinode-911357 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911357 cp multinode-911357:/home/docker/cp-test.txt multinode-911357-m02:/home/docker/cp-test_multinode-911357_multinode-911357-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911357 ssh -n multinode-911357 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911357 ssh -n multinode-911357-m02 "sudo cat /home/docker/cp-test_multinode-911357_multinode-911357-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911357 cp multinode-911357:/home/docker/cp-test.txt multinode-911357-m03:/home/docker/cp-test_multinode-911357_multinode-911357-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911357 ssh -n multinode-911357 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911357 ssh -n multinode-911357-m03 "sudo cat /home/docker/cp-test_multinode-911357_multinode-911357-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911357 cp testdata/cp-test.txt multinode-911357-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911357 ssh -n multinode-911357-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911357 cp multinode-911357-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2869450978/001/cp-test_multinode-911357-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911357 ssh -n multinode-911357-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911357 cp multinode-911357-m02:/home/docker/cp-test.txt multinode-911357:/home/docker/cp-test_multinode-911357-m02_multinode-911357.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911357 ssh -n multinode-911357-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911357 ssh -n multinode-911357 "sudo cat /home/docker/cp-test_multinode-911357-m02_multinode-911357.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911357 cp multinode-911357-m02:/home/docker/cp-test.txt multinode-911357-m03:/home/docker/cp-test_multinode-911357-m02_multinode-911357-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911357 ssh -n multinode-911357-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911357 ssh -n multinode-911357-m03 "sudo cat /home/docker/cp-test_multinode-911357-m02_multinode-911357-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911357 cp testdata/cp-test.txt multinode-911357-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911357 ssh -n multinode-911357-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911357 cp multinode-911357-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2869450978/001/cp-test_multinode-911357-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911357 ssh -n multinode-911357-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911357 cp multinode-911357-m03:/home/docker/cp-test.txt multinode-911357:/home/docker/cp-test_multinode-911357-m03_multinode-911357.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911357 ssh -n multinode-911357-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911357 ssh -n multinode-911357 "sudo cat /home/docker/cp-test_multinode-911357-m03_multinode-911357.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911357 cp multinode-911357-m03:/home/docker/cp-test.txt multinode-911357-m02:/home/docker/cp-test_multinode-911357-m03_multinode-911357-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911357 ssh -n multinode-911357-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911357 ssh -n multinode-911357-m02 "sudo cat /home/docker/cp-test_multinode-911357-m03_multinode-911357-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.90s)

                                                
                                    
TestMultiNode/serial/StopNode (2.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911357 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-911357 node stop m03: (1.494570549s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911357 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-911357 status: exit status 7 (317.060776ms)

                                                
                                                
-- stdout --
	multinode-911357
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-911357-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-911357-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911357 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-911357 status --alsologtostderr: exit status 7 (310.445097ms)

                                                
                                                
-- stdout --
	multinode-911357
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-911357-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-911357-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 14:30:26.351176  164629 out.go:360] Setting OutFile to fd 1 ...
	I1213 14:30:26.351460  164629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:30:26.351472  164629 out.go:374] Setting ErrFile to fd 2...
	I1213 14:30:26.351476  164629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:30:26.351648  164629 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-131207/.minikube/bin
	I1213 14:30:26.351808  164629 out.go:368] Setting JSON to false
	I1213 14:30:26.351835  164629 mustload.go:66] Loading cluster: multinode-911357
	I1213 14:30:26.351903  164629 notify.go:221] Checking for updates...
	I1213 14:30:26.352300  164629 config.go:182] Loaded profile config "multinode-911357": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 14:30:26.352321  164629 status.go:174] checking status of multinode-911357 ...
	I1213 14:30:26.354528  164629 status.go:371] multinode-911357 host status = "Running" (err=<nil>)
	I1213 14:30:26.354546  164629 host.go:66] Checking if "multinode-911357" exists ...
	I1213 14:30:26.357053  164629 main.go:143] libmachine: domain multinode-911357 has defined MAC address 52:54:00:6c:38:01 in network mk-multinode-911357
	I1213 14:30:26.357426  164629 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6c:38:01", ip: ""} in network mk-multinode-911357: {Iface:virbr1 ExpiryTime:2025-12-13 15:28:09 +0000 UTC Type:0 Mac:52:54:00:6c:38:01 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:multinode-911357 Clientid:01:52:54:00:6c:38:01}
	I1213 14:30:26.357457  164629 main.go:143] libmachine: domain multinode-911357 has defined IP address 192.168.39.25 and MAC address 52:54:00:6c:38:01 in network mk-multinode-911357
	I1213 14:30:26.357583  164629 host.go:66] Checking if "multinode-911357" exists ...
	I1213 14:30:26.357807  164629 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 14:30:26.360171  164629 main.go:143] libmachine: domain multinode-911357 has defined MAC address 52:54:00:6c:38:01 in network mk-multinode-911357
	I1213 14:30:26.360549  164629 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6c:38:01", ip: ""} in network mk-multinode-911357: {Iface:virbr1 ExpiryTime:2025-12-13 15:28:09 +0000 UTC Type:0 Mac:52:54:00:6c:38:01 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:multinode-911357 Clientid:01:52:54:00:6c:38:01}
	I1213 14:30:26.360588  164629 main.go:143] libmachine: domain multinode-911357 has defined IP address 192.168.39.25 and MAC address 52:54:00:6c:38:01 in network mk-multinode-911357
	I1213 14:30:26.360740  164629 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/multinode-911357/id_rsa Username:docker}
	I1213 14:30:26.443443  164629 ssh_runner.go:195] Run: systemctl --version
	I1213 14:30:26.449791  164629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 14:30:26.465255  164629 kubeconfig.go:125] found "multinode-911357" server: "https://192.168.39.25:8443"
	I1213 14:30:26.465296  164629 api_server.go:166] Checking apiserver status ...
	I1213 14:30:26.465329  164629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:30:26.482941  164629 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1366/cgroup
	W1213 14:30:26.492925  164629 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1366/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1213 14:30:26.492973  164629 ssh_runner.go:195] Run: ls
	I1213 14:30:26.497273  164629 api_server.go:253] Checking apiserver healthz at https://192.168.39.25:8443/healthz ...
	I1213 14:30:26.501734  164629 api_server.go:279] https://192.168.39.25:8443/healthz returned 200:
	ok
	I1213 14:30:26.501760  164629 status.go:463] multinode-911357 apiserver status = Running (err=<nil>)
	I1213 14:30:26.501775  164629 status.go:176] multinode-911357 status: &{Name:multinode-911357 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 14:30:26.501826  164629 status.go:174] checking status of multinode-911357-m02 ...
	I1213 14:30:26.503323  164629 status.go:371] multinode-911357-m02 host status = "Running" (err=<nil>)
	I1213 14:30:26.503339  164629 host.go:66] Checking if "multinode-911357-m02" exists ...
	I1213 14:30:26.505694  164629 main.go:143] libmachine: domain multinode-911357-m02 has defined MAC address 52:54:00:ca:95:02 in network mk-multinode-911357
	I1213 14:30:26.506105  164629 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ca:95:02", ip: ""} in network mk-multinode-911357: {Iface:virbr1 ExpiryTime:2025-12-13 15:29:01 +0000 UTC Type:0 Mac:52:54:00:ca:95:02 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-911357-m02 Clientid:01:52:54:00:ca:95:02}
	I1213 14:30:26.506142  164629 main.go:143] libmachine: domain multinode-911357-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:ca:95:02 in network mk-multinode-911357
	I1213 14:30:26.506274  164629 host.go:66] Checking if "multinode-911357-m02" exists ...
	I1213 14:30:26.506451  164629 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 14:30:26.508357  164629 main.go:143] libmachine: domain multinode-911357-m02 has defined MAC address 52:54:00:ca:95:02 in network mk-multinode-911357
	I1213 14:30:26.508672  164629 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ca:95:02", ip: ""} in network mk-multinode-911357: {Iface:virbr1 ExpiryTime:2025-12-13 15:29:01 +0000 UTC Type:0 Mac:52:54:00:ca:95:02 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-911357-m02 Clientid:01:52:54:00:ca:95:02}
	I1213 14:30:26.508697  164629 main.go:143] libmachine: domain multinode-911357-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:ca:95:02 in network mk-multinode-911357
	I1213 14:30:26.508858  164629 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/multinode-911357-m02/id_rsa Username:docker}
	I1213 14:30:26.586039  164629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 14:30:26.600416  164629 status.go:176] multinode-911357-m02 status: &{Name:multinode-911357-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1213 14:30:26.600453  164629 status.go:174] checking status of multinode-911357-m03 ...
	I1213 14:30:26.602005  164629 status.go:371] multinode-911357-m03 host status = "Stopped" (err=<nil>)
	I1213 14:30:26.602019  164629 status.go:384] host is not running, skipping remaining checks
	I1213 14:30:26.602024  164629 status.go:176] multinode-911357-m03 status: &{Name:multinode-911357-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.12s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (35.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911357 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-911357 node start m03 -v=5 --alsologtostderr: (35.479506965s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911357 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (35.97s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (281.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-911357
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-911357
E1213 14:33:12.163057  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:33:12.586004  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-359736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-911357: (2m33.81787342s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-911357 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-911357 --wait=true -v=5 --alsologtostderr: (2m7.745193252s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-911357
--- PASS: TestMultiNode/serial/RestartKeepsNodes (281.68s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911357 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-911357 node delete m03: (2.029548595s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911357 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.46s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (160.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911357 stop
E1213 14:36:15.243933  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:38:12.162389  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:38:12.585296  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-359736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-911357 stop: (2m40.609286535s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911357 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-911357 status: exit status 7 (65.149352ms)

                                                
                                                
-- stdout --
	multinode-911357
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-911357-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911357 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-911357 status --alsologtostderr: exit status 7 (61.804115ms)

                                                
                                                
-- stdout --
	multinode-911357
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-911357-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 14:38:27.448754  166952 out.go:360] Setting OutFile to fd 1 ...
	I1213 14:38:27.449025  166952 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:38:27.449034  166952 out.go:374] Setting ErrFile to fd 2...
	I1213 14:38:27.449038  166952 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:38:27.449219  166952 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-131207/.minikube/bin
	I1213 14:38:27.449373  166952 out.go:368] Setting JSON to false
	I1213 14:38:27.449397  166952 mustload.go:66] Loading cluster: multinode-911357
	I1213 14:38:27.449526  166952 notify.go:221] Checking for updates...
	I1213 14:38:27.449788  166952 config.go:182] Loaded profile config "multinode-911357": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 14:38:27.449806  166952 status.go:174] checking status of multinode-911357 ...
	I1213 14:38:27.451960  166952 status.go:371] multinode-911357 host status = "Stopped" (err=<nil>)
	I1213 14:38:27.451974  166952 status.go:384] host is not running, skipping remaining checks
	I1213 14:38:27.451979  166952 status.go:176] multinode-911357 status: &{Name:multinode-911357 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 14:38:27.451993  166952 status.go:174] checking status of multinode-911357-m02 ...
	I1213 14:38:27.452973  166952 status.go:371] multinode-911357-m02 host status = "Stopped" (err=<nil>)
	I1213 14:38:27.452986  166952 status.go:384] host is not running, skipping remaining checks
	I1213 14:38:27.452990  166952 status.go:176] multinode-911357-m02 status: &{Name:multinode-911357-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (160.74s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (90s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-911357 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-911357 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m29.555562963s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911357 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (90.00s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (37.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-911357
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-911357-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-911357-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (71.911294ms)

                                                
                                                
-- stdout --
	* [multinode-911357-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-131207/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-131207/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-911357-m02' is duplicated with machine name 'multinode-911357-m02' in profile 'multinode-911357'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-911357-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-911357-m03 --driver=kvm2  --container-runtime=crio: (36.199280833s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-911357
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-911357: exit status 80 (194.172389ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-911357 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-911357-m03 already exists in multinode-911357-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-911357-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.36s)
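Both rejections above are name-collision guards: a new profile may not reuse a machine name owned by an existing multi-node profile, and `node add` refuses a node name that already exists. A minimal sketch of the first check, assuming a profile is just a name plus its machine names (hypothetical types, not minikube's own):

package main

import "fmt"

// profile is a hypothetical stand-in: a profile name plus its machine names.
type profile struct {
	Name     string
	Machines []string
}

// conflicts reports whether a requested profile name collides with any
// existing profile or machine name, in the spirit of the MK_USAGE error above.
func conflicts(requested string, existing []profile) bool {
	for _, p := range existing {
		if p.Name == requested {
			return true
		}
		for _, m := range p.Machines {
			if m == requested {
				return true
			}
		}
	}
	return false
}

func main() {
	existing := []profile{{
		Name:     "multinode-911357",
		Machines: []string{"multinode-911357", "multinode-911357-m02"},
	}}
	fmt.Println(conflicts("multinode-911357-m02", existing)) // true  -> rejected, as in the log
	fmt.Println(conflicts("multinode-911357-m03", existing)) // false -> allowed to start
}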

                                                
                                    
TestScheduledStopUnix (107.16s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-305528 --memory=3072 --driver=kvm2  --container-runtime=crio
E1213 14:43:12.162864  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:43:12.585761  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-359736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-305528 --memory=3072 --driver=kvm2  --container-runtime=crio: (35.522429284s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-305528 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1213 14:43:35.692726  169317 out.go:360] Setting OutFile to fd 1 ...
	I1213 14:43:35.692987  169317 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:43:35.692996  169317 out.go:374] Setting ErrFile to fd 2...
	I1213 14:43:35.692999  169317 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:43:35.693218  169317 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-131207/.minikube/bin
	I1213 14:43:35.693443  169317 out.go:368] Setting JSON to false
	I1213 14:43:35.693521  169317 mustload.go:66] Loading cluster: scheduled-stop-305528
	I1213 14:43:35.693931  169317 config.go:182] Loaded profile config "scheduled-stop-305528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 14:43:35.694095  169317 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/scheduled-stop-305528/config.json ...
	I1213 14:43:35.694346  169317 mustload.go:66] Loading cluster: scheduled-stop-305528
	I1213 14:43:35.694504  169317 config.go:182] Loaded profile config "scheduled-stop-305528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-305528 -n scheduled-stop-305528
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-305528 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1213 14:43:35.974747  169362 out.go:360] Setting OutFile to fd 1 ...
	I1213 14:43:35.975009  169362 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:43:35.975017  169362 out.go:374] Setting ErrFile to fd 2...
	I1213 14:43:35.975021  169362 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:43:35.975289  169362 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-131207/.minikube/bin
	I1213 14:43:35.975643  169362 out.go:368] Setting JSON to false
	I1213 14:43:35.975892  169362 daemonize_unix.go:73] killing process 169351 as it is an old scheduled stop
	I1213 14:43:35.976006  169362 mustload.go:66] Loading cluster: scheduled-stop-305528
	I1213 14:43:35.976447  169362 config.go:182] Loaded profile config "scheduled-stop-305528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 14:43:35.976537  169362 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/scheduled-stop-305528/config.json ...
	I1213 14:43:35.976761  169362 mustload.go:66] Loading cluster: scheduled-stop-305528
	I1213 14:43:35.976880  169362 config.go:182] Loaded profile config "scheduled-stop-305528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:180: process 169351 is a zombie
I1213 14:43:35.982618  135234 retry.go:31] will retry after 87.214µs: open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/scheduled-stop-305528/pid: no such file or directory
I1213 14:43:35.983781  135234 retry.go:31] will retry after 198.936µs: open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/scheduled-stop-305528/pid: no such file or directory
I1213 14:43:35.984899  135234 retry.go:31] will retry after 272.81µs: open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/scheduled-stop-305528/pid: no such file or directory
I1213 14:43:35.986040  135234 retry.go:31] will retry after 324.314µs: open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/scheduled-stop-305528/pid: no such file or directory
I1213 14:43:35.987182  135234 retry.go:31] will retry after 336.947µs: open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/scheduled-stop-305528/pid: no such file or directory
I1213 14:43:35.988308  135234 retry.go:31] will retry after 1.135488ms: open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/scheduled-stop-305528/pid: no such file or directory
I1213 14:43:35.990502  135234 retry.go:31] will retry after 1.121131ms: open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/scheduled-stop-305528/pid: no such file or directory
I1213 14:43:35.992695  135234 retry.go:31] will retry after 2.06717ms: open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/scheduled-stop-305528/pid: no such file or directory
I1213 14:43:35.994856  135234 retry.go:31] will retry after 2.1347ms: open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/scheduled-stop-305528/pid: no such file or directory
I1213 14:43:35.998069  135234 retry.go:31] will retry after 2.164652ms: open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/scheduled-stop-305528/pid: no such file or directory
I1213 14:43:36.001276  135234 retry.go:31] will retry after 6.323655ms: open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/scheduled-stop-305528/pid: no such file or directory
I1213 14:43:36.008432  135234 retry.go:31] will retry after 10.570001ms: open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/scheduled-stop-305528/pid: no such file or directory
I1213 14:43:36.019628  135234 retry.go:31] will retry after 16.86216ms: open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/scheduled-stop-305528/pid: no such file or directory
I1213 14:43:36.036880  135234 retry.go:31] will retry after 26.041474ms: open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/scheduled-stop-305528/pid: no such file or directory
I1213 14:43:36.063083  135234 retry.go:31] will retry after 23.732041ms: open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/scheduled-stop-305528/pid: no such file or directory
I1213 14:43:36.087386  135234 retry.go:31] will retry after 34.095925ms: open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/scheduled-stop-305528/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-305528 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-305528 -n scheduled-stop-305528
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-305528
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-305528 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1213 14:44:01.708732  169511 out.go:360] Setting OutFile to fd 1 ...
	I1213 14:44:01.709002  169511 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:44:01.709011  169511 out.go:374] Setting ErrFile to fd 2...
	I1213 14:44:01.709015  169511 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:44:01.709252  169511 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-131207/.minikube/bin
	I1213 14:44:01.709515  169511 out.go:368] Setting JSON to false
	I1213 14:44:01.709595  169511 mustload.go:66] Loading cluster: scheduled-stop-305528
	I1213 14:44:01.709948  169511 config.go:182] Loaded profile config "scheduled-stop-305528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 14:44:01.710015  169511 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/scheduled-stop-305528/config.json ...
	I1213 14:44:01.710239  169511 mustload.go:66] Loading cluster: scheduled-stop-305528
	I1213 14:44:01.710404  169511 config.go:182] Loaded profile config "scheduled-stop-305528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-305528
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-305528: exit status 7 (62.542021ms)

                                                
                                                
-- stdout --
	scheduled-stop-305528
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-305528 -n scheduled-stop-305528
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-305528 -n scheduled-stop-305528: exit status 7 (60.053423ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-305528" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-305528
--- PASS: TestScheduledStopUnix (107.16s)
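The retry.go lines above show the test polling for the scheduled-stop pid file with a short, growing delay until it exists. A self-contained sketch of that polling pattern; the path, attempt count, and delays below are illustrative, not the test's actual values:

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// waitForFile polls for path with a growing delay, mirroring the
// "will retry after ..." lines in the log. Not the test's own helper.
func waitForFile(path string, attempts int) error {
	delay := 100 * time.Microsecond
	for i := 0; i < attempts; i++ {
		if _, err := os.Stat(path); err == nil {
			return nil
		} else if !errors.Is(err, os.ErrNotExist) {
			return err
		}
		time.Sleep(delay)
		delay *= 2 // rough exponential backoff
	}
	return fmt.Errorf("%s did not appear after %d attempts", path, attempts)
}

func main() {
	// Hypothetical pid-file path in the same spirit as the profile pid file above.
	err := waitForFile("/tmp/scheduled-stop-example/pid", 15)
	fmt.Println("result:", err)
}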

                                                
                                    
TestRunningBinaryUpgrade (409.31s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.3514764853 start -p running-upgrade-352355 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.3514764853 start -p running-upgrade-352355 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m32.383815187s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-352355 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-352355 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (5m12.258388463s)
helpers_test.go:176: Cleaning up "running-upgrade-352355" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-352355
--- PASS: TestRunningBinaryUpgrade (409.31s)

                                                
                                    
TestKubernetesUpgrade (97.52s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-728162 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-728162 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (43.086802874s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-728162
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-728162: (2.178040208s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-728162 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-728162 status --format={{.Host}}: exit status 7 (76.936145ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-728162 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-728162 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (38.545674361s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-728162 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-728162 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-728162 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (80.442493ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-728162] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-131207/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-131207/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-beta.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-728162
	    minikube start -p kubernetes-upgrade-728162 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7281622 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-728162 --kubernetes-version=v1.35.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-728162 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-728162 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (12.42211064s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-728162" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-728162
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-728162: (1.067299603s)
--- PASS: TestKubernetesUpgrade (97.52s)
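The K8S_DOWNGRADE_UNSUPPORTED exit above reflects a guard that refuses any Kubernetes version older than the one the existing cluster already runs and suggests recreating the cluster or starting a second one instead. A rough sketch of such a guard with a hand-rolled major/minor comparison (illustrative only; not minikube's implementation):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// majorMinor extracts the major and minor numbers from a version such as
// "v1.35.0-beta.0"; error handling is kept minimal for the sketch.
func majorMinor(v string) (int, int) {
	v = strings.TrimPrefix(v, "v")
	parts := strings.SplitN(v, ".", 3)
	major, _ := strconv.Atoi(parts[0])
	minor, _ := strconv.Atoi(parts[1])
	return major, minor
}

// downgradeRequested reports whether the requested version is older than
// the version the existing cluster runs.
func downgradeRequested(current, requested string) bool {
	cMaj, cMin := majorMinor(current)
	rMaj, rMin := majorMinor(requested)
	return rMaj < cMaj || (rMaj == cMaj && rMin < cMin)
}

func main() {
	// Versions taken from the log above.
	fmt.Println(downgradeRequested("v1.35.0-beta.0", "v1.28.0")) // true  -> refuse
	fmt.Println(downgradeRequested("v1.28.0", "v1.35.0-beta.0")) // false -> upgrade ok
}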

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-303609 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-303609 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (93.312412ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-303609] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-131207/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-131207/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
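The failure above is the intended validation path: --no-kubernetes and an explicit --kubernetes-version conflict. A tiny sketch of that mutual-exclusion check with the standard flag package (a hypothetical flag set, not minikube's actual CLI code):

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	fs := flag.NewFlagSet("start", flag.ExitOnError)
	noKubernetes := fs.Bool("no-kubernetes", false, "start without Kubernetes")
	kubernetesVersion := fs.String("kubernetes-version", "", "Kubernetes version to run")
	fs.Parse(os.Args[1:])

	// Mirror of the MK_USAGE rule in the log: the two flags are mutually exclusive.
	if *noKubernetes && *kubernetesVersion != "" {
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // the log shows exit status 14 for this usage error
	}
	fmt.Println("flags ok")
}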

                                                
                                    
TestPause/serial/Start (98.98s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-711635 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-711635 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m38.978421901s)
--- PASS: TestPause/serial/Start (98.98s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (76.6s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-303609 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-303609 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m16.363038927s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-303609 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (76.60s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (6.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-303609 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-303609 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (4.298963684s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-303609 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-303609 status -o json: exit status 2 (232.655026ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-303609","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-303609
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-303609: (1.567870221s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (6.10s)
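The `status -o json` output above is the expected shape for a --no-kubernetes profile: the host keeps running while the Kubernetes components are stopped. A small decoding sketch built only from the field names visible in that JSON (illustrative types, not minikube's):

package main

import (
	"encoding/json"
	"fmt"
)

// clusterStatus mirrors only the keys visible in the JSON above;
// it is an illustration, not minikube's own status type.
type clusterStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	// Shape and values copied from the stdout above.
	raw := `{"Name":"NoKubernetes-303609","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`

	var st clusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	// For a --no-kubernetes profile we expect a running host with no kubelet.
	fmt.Printf("host=%s kubelet=%s ok=%t\n",
		st.Host, st.Kubelet, st.Host == "Running" && st.Kubelet == "Stopped")
}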

                                                
                                    
TestNoKubernetes/serial/Start (19.73s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-303609 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-303609 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (19.732395678s)
--- PASS: TestNoKubernetes/serial/Start (19.73s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (3.81s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.81s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (85.22s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.3228034794 start -p stopped-upgrade-729395 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.3228034794 start -p stopped-upgrade-729395 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (44.390484483s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.3228034794 -p stopped-upgrade-729395 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.3228034794 -p stopped-upgrade-729395 stop: (1.835619722s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-729395 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-729395 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (38.998027022s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (85.22s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22122-131207/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-303609 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-303609 "sudo systemctl is-active --quiet service kubelet": exit status 1 (165.627061ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.17s)
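The verification above treats any non-zero exit of `systemctl is-active --quiet service kubelet` as "kubelet is not running", which is what a --no-kubernetes profile should report. A minimal local sketch of that exit-code check (run directly rather than over minikube's SSH, and the unit name is only an example):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// isUnitActive returns true only when `systemctl is-active --quiet <unit>` exits 0;
// any non-zero exit (unit inactive or missing) is treated as "not active".
func isUnitActive(unit string) (bool, error) {
	err := exec.Command("systemctl", "is-active", "--quiet", unit).Run()
	if err == nil {
		return true, nil
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return false, nil // systemctl ran, but the unit is not active
	}
	return false, err // systemctl itself could not be run
}

func main() {
	active, err := isUnitActive("kubelet")
	fmt.Println("kubelet active:", active, "err:", err)
}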

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.07s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-303609
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-303609: (1.312037228s)
--- PASS: TestNoKubernetes/serial/Stop (1.31s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (43.05s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-303609 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-303609 --driver=kvm2  --container-runtime=crio: (43.053161918s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (43.05s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-303609 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-303609 "sudo systemctl is-active --quiet service kubelet": exit status 1 (189.502928ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.04s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-729395
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-729395: (1.044561591s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.04s)

                                                
                                    
TestISOImage/Setup (22.05s)

                                                
                                                
=== RUN   TestISOImage/Setup
iso_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p guest-752451 --no-kubernetes --driver=kvm2  --container-runtime=crio
iso_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p guest-752451 --no-kubernetes --driver=kvm2  --container-runtime=crio: (22.046309706s)
--- PASS: TestISOImage/Setup (22.05s)

                                                
                                    
TestNetworkPlugins/group/false (4.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-590122 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-590122 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (129.655856ms)

                                                
                                                
-- stdout --
	* [false-590122] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-131207/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-131207/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 14:47:49.650326  173198 out.go:360] Setting OutFile to fd 1 ...
	I1213 14:47:49.650445  173198 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:47:49.650457  173198 out.go:374] Setting ErrFile to fd 2...
	I1213 14:47:49.650463  173198 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:47:49.650746  173198 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-131207/.minikube/bin
	I1213 14:47:49.651245  173198 out.go:368] Setting JSON to false
	I1213 14:47:49.652166  173198 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9010,"bootTime":1765628260,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 14:47:49.652229  173198 start.go:143] virtualization: kvm guest
	I1213 14:47:49.654196  173198 out.go:179] * [false-590122] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 14:47:49.655566  173198 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 14:47:49.655559  173198 notify.go:221] Checking for updates...
	I1213 14:47:49.656939  173198 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 14:47:49.658642  173198 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-131207/kubeconfig
	I1213 14:47:49.659740  173198 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-131207/.minikube
	I1213 14:47:49.660889  173198 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 14:47:49.665563  173198 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 14:47:49.667445  173198 config.go:182] Loaded profile config "force-systemd-env-936726": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 14:47:49.667556  173198 config.go:182] Loaded profile config "guest-752451": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1213 14:47:49.667633  173198 config.go:182] Loaded profile config "running-upgrade-352355": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1213 14:47:49.667722  173198 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 14:47:49.701899  173198 out.go:179] * Using the kvm2 driver based on user configuration
	I1213 14:47:49.702997  173198 start.go:309] selected driver: kvm2
	I1213 14:47:49.703011  173198 start.go:927] validating driver "kvm2" against <nil>
	I1213 14:47:49.703034  173198 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 14:47:49.708498  173198 out.go:203] 
	W1213 14:47:49.709591  173198 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1213 14:47:49.710667  173198 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-590122 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-590122

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-590122

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-590122

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-590122

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-590122

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-590122

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-590122

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-590122

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-590122

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-590122

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-590122"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-590122"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-590122"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-590122

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-590122"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-590122"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-590122" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-590122" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-590122" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-590122" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-590122" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-590122" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-590122" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-590122" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-590122"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-590122"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-590122"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-590122"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-590122"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-590122" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-590122" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-590122" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-590122"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-590122"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-590122"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-590122"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-590122"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22122-131207/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 14:46:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.72.235:8443
  name: running-upgrade-352355
contexts:
- context:
    cluster: running-upgrade-352355
    user: running-upgrade-352355
  name: running-upgrade-352355
current-context: ""
kind: Config
users:
- name: running-upgrade-352355
  user:
    client-certificate: /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/running-upgrade-352355/client.crt
    client-key: /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/running-upgrade-352355/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-590122

>>> host: docker daemon status:
* Profile "false-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-590122"

>>> host: docker daemon config:
* Profile "false-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-590122"

>>> host: /etc/docker/daemon.json:
* Profile "false-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-590122"

>>> host: docker system info:
* Profile "false-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-590122"

>>> host: cri-docker daemon status:
* Profile "false-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-590122"

>>> host: cri-docker daemon config:
* Profile "false-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-590122"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-590122"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-590122"

>>> host: cri-dockerd version:
* Profile "false-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-590122"

>>> host: containerd daemon status:
* Profile "false-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-590122"

>>> host: containerd daemon config:
* Profile "false-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-590122"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-590122"

>>> host: /etc/containerd/config.toml:
* Profile "false-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-590122"

>>> host: containerd config dump:
* Profile "false-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-590122"

>>> host: crio daemon status:
* Profile "false-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-590122"

>>> host: crio daemon config:
* Profile "false-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-590122"

>>> host: /etc/crio:
* Profile "false-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-590122"

>>> host: crio config:
* Profile "false-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-590122"

----------------------- debugLogs end: false-590122 [took: 4.031379028s] --------------------------------
helpers_test.go:176: Cleaning up "false-590122" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-590122
--- PASS: TestNetworkPlugins/group/false (4.35s)
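The debugLogs sweep above could not find the "false-590122" profile, so every per-host query short-circuits with the same message. To check by hand which minikube profiles and kubectl contexts actually exist on a host, a minimal sketch (the context name below is a placeholder, not taken from this run):
  minikube profile list
  kubectl config get-contexts
  kubectl config use-context <existing-context>   # placeholder; pick a name shown by get-contexts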

                                                
                                    
x
+
TestISOImage/Binaries/crictl (0.16s)

                                                
                                                
=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/crictl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-752451 ssh "which crictl"
--- PASS: TestISOImage/Binaries/crictl (0.16s)

                                                
                                    
x
+
TestISOImage/Binaries/curl (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/curl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-752451 ssh "which curl"
--- PASS: TestISOImage/Binaries/curl (0.17s)

                                                
                                    
x
+
TestISOImage/Binaries/docker (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/docker
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-752451 ssh "which docker"
--- PASS: TestISOImage/Binaries/docker (0.17s)

                                                
                                    
x
+
TestISOImage/Binaries/git (0.16s)

                                                
                                                
=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/git
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-752451 ssh "which git"
--- PASS: TestISOImage/Binaries/git (0.16s)

                                                
                                    
x
+
TestISOImage/Binaries/iptables (0.16s)

                                                
                                                
=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/iptables
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-752451 ssh "which iptables"
--- PASS: TestISOImage/Binaries/iptables (0.16s)

                                                
                                    
x
+
TestISOImage/Binaries/podman (0.16s)

                                                
                                                
=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/podman
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-752451 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.16s)

                                                
                                    
x
+
TestISOImage/Binaries/rsync (0.16s)

                                                
                                                
=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/rsync
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-752451 ssh "which rsync"
--- PASS: TestISOImage/Binaries/rsync (0.16s)

                                                
                                    
x
+
TestISOImage/Binaries/socat (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/socat
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-752451 ssh "which socat"
--- PASS: TestISOImage/Binaries/socat (0.17s)

                                                
                                    
x
+
TestISOImage/Binaries/wget (0.16s)

                                                
                                                
=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/wget
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-752451 ssh "which wget"
--- PASS: TestISOImage/Binaries/wget (0.16s)

                                                
                                    
x
+
TestISOImage/Binaries/VBoxControl (0.16s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-752451 ssh "which VBoxControl"
--- PASS: TestISOImage/Binaries/VBoxControl (0.16s)

                                                
                                    
x
+
TestISOImage/Binaries/VBoxService (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-752451 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.17s)
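Each Binaries subtest above verifies only that the tool is present on the guest's PATH, by running which through minikube ssh against the guest-752451 profile. The same spot-check can be repeated by hand with the commands the tests already use, for example:
  out/minikube-linux-amd64 -p guest-752451 ssh "which crictl"
  out/minikube-linux-amd64 -p guest-752451 ssh "which podman"
which exits non-zero when the binary is absent, so a failing subtest points at a tool missing from the ISO image.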

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (83.93s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-203738 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-203738 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m23.930569017s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (83.93s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (86.53s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-874954 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-874954 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (1m26.532263355s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (86.53s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (10.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-203738 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [69ee6809-9568-4353-8e63-06ea589f064c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [69ee6809-9568-4353-8e63-06ea589f064c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003876692s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-203738 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.38s)
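The DeployApp step amounts to creating the busybox pod from testdata, waiting for it to reach Running/Ready, and reading its open-file limit. A hand-run equivalent, as a sketch that substitutes kubectl wait for the test helper's poll loop:
  kubectl --context old-k8s-version-203738 create -f testdata/busybox.yaml
  kubectl --context old-k8s-version-203738 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m0s   # stands in for the helpers_test poll
  kubectl --context old-k8s-version-203738 exec busybox -- /bin/sh -c "ulimit -n"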

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-203738 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-203738 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.126511242s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-203738 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.22s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (83.7s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-203738 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-203738 --alsologtostderr -v=3: (1m23.697761194s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (83.70s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (74.53s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-537881 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-537881 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (1m14.527136048s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (74.53s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.37s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-874954 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [8059da89-5133-4743-a50b-358a99829ee7] Pending
helpers_test.go:353: "busybox" [8059da89-5133-4743-a50b-358a99829ee7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [8059da89-5133-4743-a50b-358a99829ee7] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004340306s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-874954 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.37s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.9s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-874954 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-874954 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.90s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (82.55s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-874954 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-874954 --alsologtostderr -v=3: (1m22.550322152s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (82.55s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-203738 -n old-k8s-version-203738
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-203738 -n old-k8s-version-203738: exit status 7 (68.03157ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-203738 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.16s)
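EnableAddonAfterStop needs only two commands: confirm the host reports Stopped (the status call exits non-zero, status 7 in this run), then enable the addon against the stopped profile:
  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-203738 -n old-k8s-version-203738
  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-203738 --images=MetricsScraper=registry.k8s.io/echoserver:1.4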

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (43.94s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-203738 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-203738 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (43.667574683s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-203738 -n old-k8s-version-203738
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (43.94s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (98.76s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-141241 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-141241 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (1m38.761463194s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (98.76s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (13.66s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-537881 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [869dbf0f-9785-41e1-8a2b-e7e191bba7d7] Pending
helpers_test.go:353: "busybox" [869dbf0f-9785-41e1-8a2b-e7e191bba7d7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1213 14:52:55.246282  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [869dbf0f-9785-41e1-8a2b-e7e191bba7d7] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 13.00290979s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-537881 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (13.66s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.97s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-537881 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-537881 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.97s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (88.55s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-537881 --alsologtostderr -v=3
E1213 14:53:12.153722  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:53:12.585902  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-359736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-537881 --alsologtostderr -v=3: (1m28.549611437s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (88.55s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (14.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-9tts7" [44ab621d-7877-4bfb-85f0-ddea445ef485] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-9tts7" [44ab621d-7877-4bfb-85f0-ddea445ef485] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.004240485s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (14.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-9tts7" [44ab621d-7877-4bfb-85f0-ddea445ef485] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004079189s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-203738 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-203738 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.19s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (2.36s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-203738 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-203738 -n old-k8s-version-203738
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-203738 -n old-k8s-version-203738: exit status 2 (205.129155ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-203738 -n old-k8s-version-203738
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-203738 -n old-k8s-version-203738: exit status 2 (202.138267ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-203738 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-203738 -n old-k8s-version-203738
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-203738 -n old-k8s-version-203738
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.36s)
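The Pause check above can be reproduced with the same four commands: pause the profile, confirm the API server reports Paused and the kubelet reports Stopped (both status calls exit with status 2 while paused in this run), then unpause:
  out/minikube-linux-amd64 pause -p old-k8s-version-203738 --alsologtostderr -v=1
  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-203738 -n old-k8s-version-203738
  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-203738 -n old-k8s-version-203738
  out/minikube-linux-amd64 unpause -p old-k8s-version-203738 --alsologtostderr -v=1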

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (38.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-859195 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-859195 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (38.20033087s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (38.20s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-874954 -n no-preload-874954
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-874954 -n no-preload-874954: exit status 7 (70.108071ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-874954 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.16s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (57.47s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-874954 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-874954 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (57.215867567s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-874954 -n no-preload-874954
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (57.47s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.32s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-141241 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [701d3371-6c28-4afe-bba0-151894d499fb] Pending
helpers_test.go:353: "busybox" [701d3371-6c28-4afe-bba0-151894d499fb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [701d3371-6c28-4afe-bba0-151894d499fb] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004173676s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-141241 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.32s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.3s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-859195 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-859195 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.296135859s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.30s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (80.09s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-859195 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-859195 --alsologtostderr -v=3: (1m20.085238882s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (80.09s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.94s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-141241 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-141241 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.94s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (85.5s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-141241 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-141241 --alsologtostderr -v=3: (1m25.49962846s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (85.50s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-537881 -n embed-certs-537881
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-537881 -n embed-certs-537881: exit status 7 (72.122233ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-537881 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.16s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (44.45s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-537881 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-537881 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (44.203323968s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-537881 -n embed-certs-537881
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (44.45s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-fwth2" [2edff8d9-2dc0-4ec3-a081-3f84d9a916c6] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-fwth2" [2edff8d9-2dc0-4ec3-a081-3f84d9a916c6] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.004008728s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-fwth2" [2edff8d9-2dc0-4ec3-a081-3f84d9a916c6] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004144979s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-874954 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-874954 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.31s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (2.69s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-874954 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-874954 -n no-preload-874954
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-874954 -n no-preload-874954: exit status 2 (218.612404ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-874954 -n no-preload-874954
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-874954 -n no-preload-874954: exit status 2 (222.697236ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-874954 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-874954 -n no-preload-874954
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-874954 -n no-preload-874954
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.69s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (85.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-590122 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-590122 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m25.903296718s)
--- PASS: TestNetworkPlugins/group/auto/Start (85.90s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (12.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-bgj87" [5408bbd0-ccf6-4abe-b14e-e11d7a8db57d] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-bgj87" [5408bbd0-ccf6-4abe-b14e-e11d7a8db57d] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.004845543s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (12.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-bgj87" [5408bbd0-ccf6-4abe-b14e-e11d7a8db57d] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005038826s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-537881 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-859195 -n newest-cni-859195
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-859195 -n newest-cni-859195: exit status 7 (62.133489ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-859195 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.14s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (30.97s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-859195 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-859195 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (30.610304122s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-859195 -n newest-cni-859195
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (30.97s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-537881 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (2.67s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-537881 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-537881 -n embed-certs-537881
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-537881 -n embed-certs-537881: exit status 2 (241.544216ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-537881 -n embed-certs-537881
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-537881 -n embed-certs-537881: exit status 2 (233.888825ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-537881 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-537881 -n embed-certs-537881
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-537881 -n embed-certs-537881
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.67s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (69.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-590122 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-590122 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m9.121102823s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (69.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-141241 -n default-k8s-diff-port-141241
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-141241 -n default-k8s-diff-port-141241: exit status 7 (62.200692ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-141241 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (63.5s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-141241 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
E1213 14:55:53.863395  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/old-k8s-version-203738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:55:53.869850  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/old-k8s-version-203738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:55:53.881347  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/old-k8s-version-203738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:55:53.902850  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/old-k8s-version-203738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:55:53.944291  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/old-k8s-version-203738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:55:54.025838  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/old-k8s-version-203738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:55:54.187397  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/old-k8s-version-203738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:55:54.509634  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/old-k8s-version-203738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:55:55.151589  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/old-k8s-version-203738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:55:56.433559  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/old-k8s-version-203738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:55:58.995606  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/old-k8s-version-203738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:56:04.117725  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/old-k8s-version-203738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-141241 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (1m3.170064883s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-141241 -n default-k8s-diff-port-141241
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (63.50s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-859195 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.34s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-859195 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-859195 --alsologtostderr -v=1: (1.085540475s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-859195 -n newest-cni-859195
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-859195 -n newest-cni-859195: exit status 2 (297.955814ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-859195 -n newest-cni-859195
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-859195 -n newest-cni-859195: exit status 2 (286.520172ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-859195 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-859195 -n newest-cni-859195
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-859195 -n newest-cni-859195
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.34s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (98.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-590122 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E1213 14:56:14.359838  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/old-k8s-version-203738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-590122 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m38.729919151s)
--- PASS: TestNetworkPlugins/group/calico/Start (98.73s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-590122 "pgrep -a kubelet"
I1213 14:56:25.926832  135234 config.go:182] Loaded profile config "auto-590122": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (14.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-590122 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-d7npx" [7c33e13c-3f20-4429-b5e5-db74a70562a7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-d7npx" [7c33e13c-3f20-4429-b5e5-db74a70562a7] Running
E1213 14:56:34.841770  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/old-k8s-version-203738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 14.003962407s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (14.32s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-590122 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-590122 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-590122 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-7jdls" [f73ae4d3-c51f-4517-baaa-f15c8554d366] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004070511s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-57xll" [a1769449-cf27-4e86-8226-83aa26c6f323] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004208135s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-590122 "pgrep -a kubelet"
I1213 14:56:55.323687  135234 config.go:182] Loaded profile config "kindnet-590122": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-590122 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-tx6bg" [de1a308a-37df-4e6e-9b53-851993666c08] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-tx6bg" [de1a308a-37df-4e6e-9b53-851993666c08] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.008003432s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.24s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (79.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-590122 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-590122 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m19.350226171s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (79.35s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-57xll" [a1769449-cf27-4e86-8226-83aa26c6f323] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00351496s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-141241 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-141241 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.84s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-141241 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-141241 -n default-k8s-diff-port-141241
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-141241 -n default-k8s-diff-port-141241: exit status 2 (245.784258ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-141241 -n default-k8s-diff-port-141241
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-141241 -n default-k8s-diff-port-141241: exit status 2 (226.612896ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-141241 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-141241 -n default-k8s-diff-port-141241
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-141241 -n default-k8s-diff-port-141241
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.84s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-590122 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-590122 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-590122 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (96.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-590122 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E1213 14:57:10.329746  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/no-preload-874954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:57:10.336120  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/no-preload-874954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:57:10.347528  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/no-preload-874954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:57:10.369428  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/no-preload-874954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:57:10.410856  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/no-preload-874954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:57:10.492416  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/no-preload-874954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:57:10.654022  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/no-preload-874954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:57:10.975427  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/no-preload-874954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:57:11.617038  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/no-preload-874954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:57:12.898478  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/no-preload-874954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:57:15.460173  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/no-preload-874954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:57:15.803065  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/old-k8s-version-203738/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:57:20.581965  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/no-preload-874954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-590122 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m36.509852699s)
--- PASS: TestNetworkPlugins/group/bridge/Start (96.51s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (91.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-590122 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E1213 14:57:30.824132  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/no-preload-874954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-590122 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m31.447755125s)
--- PASS: TestNetworkPlugins/group/flannel/Start (91.45s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-g98g9" [0d08588c-4cac-48a0-8d40-3823dba0a426] Running
E1213 14:57:51.306505  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/no-preload-874954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:57:55.657390  135234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/functional-359736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004660116s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-590122 "pgrep -a kubelet"
I1213 14:57:56.395792  135234 config.go:182] Loaded profile config "calico-590122": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.18s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-590122 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-vkxjx" [984792fb-45f2-448b-9eff-6d7366a60af0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-vkxjx" [984792fb-45f2-448b-9eff-6d7366a60af0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.003687662s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.23s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-590122 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-590122 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-590122 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-590122 "pgrep -a kubelet"
I1213 14:58:16.228244  135234 config.go:182] Loaded profile config "custom-flannel-590122": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-590122 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-s6d52" [434f1e2d-4b0c-4d42-9e34-3143e3b967aa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-s6d52" [434f1e2d-4b0c-4d42-9e34-3143e3b967aa] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.00389325s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.27s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (78.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-590122 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-590122 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m18.043136366s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (78.04s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-590122 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-590122 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-590122 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
TestISOImage/PersistentMounts//data (0.17s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data

=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-752451 ssh "df -t ext4 /data | grep /data"
--- PASS: TestISOImage/PersistentMounts//data (0.17s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/docker (0.18s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker

=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-752451 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/docker (0.18s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/cni (0.19s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni

=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-752451 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
--- PASS: TestISOImage/PersistentMounts//var/lib/cni (0.19s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/kubelet (0.17s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet

=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-752451 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
--- PASS: TestISOImage/PersistentMounts//var/lib/kubelet (0.17s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/minikube (0.19s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube

=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-752451 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
--- PASS: TestISOImage/PersistentMounts//var/lib/minikube (0.19s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/toolbox (0.17s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox

=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-752451 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
I1213 14:58:45.125925  135234 config.go:182] Loaded profile config "bridge-590122": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestISOImage/PersistentMounts//var/lib/toolbox (0.17s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/boot2docker (0.17s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker

=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-752451 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/boot2docker (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-590122 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-590122 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-82rbz" [ffbe00a3-f657-4676-a2af-588546fd6cdb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-82rbz" [ffbe00a3-f657-4676-a2af-588546fd6cdb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.005664226s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.25s)

                                                
                                    
TestISOImage/VersionJSON (0.18s)

                                                
                                                
=== RUN   TestISOImage/VersionJSON
iso_test.go:106: (dbg) Run:  out/minikube-linux-amd64 -p guest-752451 ssh "cat /version.json"
iso_test.go:116: Successfully parsed /version.json:
iso_test.go:118:   commit: 89f69959280ebeefd164cfeba1f5b84c6f004bc9
iso_test.go:118:   iso_version: v1.37.0-1765613186-22122
iso_test.go:118:   kicbase_version: v0.0.48-1765275396-22083
iso_test.go:118:   minikube_version: v1.37.0
--- PASS: TestISOImage/VersionJSON (0.18s)

                                                
                                    
TestISOImage/eBPFSupport (0.17s)

                                                
                                                
=== RUN   TestISOImage/eBPFSupport
iso_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p guest-752451 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
--- PASS: TestISOImage/eBPFSupport (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-g97sh" [e237c794-8ba5-4486-9a69-e84dd972eb7d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005599663s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-590122 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-590122 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-590122 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-590122 "pgrep -a kubelet"
I1213 14:58:59.299574  135234 config.go:182] Loaded profile config "flannel-590122": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-590122 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-frlp4" [3a45e0b1-ef36-415e-a0f1-77c6d89c3681] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-frlp4" [3a45e0b1-ef36-415e-a0f1-77c6d89c3681] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003696559s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-590122 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-590122 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-590122 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-590122 "pgrep -a kubelet"
I1213 14:59:42.443100  135234 config.go:182] Loaded profile config "enable-default-cni-590122": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-590122 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-vkdsg" [f5a7eed3-49c4-4723-b829-b28d84c942a4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-vkdsg" [f5a7eed3-49c4-4723-b829-b28d84c942a4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004294066s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-590122 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-590122 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-590122 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

Test skip (42/370)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
38 TestAddons/serial/Volcano 0.29
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
139 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
140 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
158 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
159 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0.01
160 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService 0.01
161 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 0.01
162 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
163 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
164 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
165 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.01
188 TestGvisorAddon 0
210 TestImageBuild 0
238 TestKicCustomNetwork 0
239 TestKicExistingNetwork 0
240 TestKicCustomSubnet 0
241 TestKicStaticIP 0
273 TestChangeNoneUser 0
276 TestScheduledStopWindows 0
278 TestSkaffold 0
280 TestInsufficientStorage 0
284 TestMissingContainerUpgrade 0
292 TestStartStop/group/disable-driver-mounts 0.17
311 TestNetworkPlugins/group/kubenet 3.95
320 TestNetworkPlugins/group/cilium 5.64
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.29s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-685870 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.29s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-869336" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-869336
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-590122 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-590122

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-590122

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-590122

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-590122

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-590122

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-590122

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-590122

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-590122

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-590122

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-590122

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-590122"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-590122"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-590122"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-590122

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-590122"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-590122"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-590122" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-590122" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-590122" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-590122" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-590122" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-590122" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-590122" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-590122" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-590122"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-590122"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-590122"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-590122"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-590122"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-590122" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-590122" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-590122" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-590122"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-590122"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-590122"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-590122"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-590122"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22122-131207/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 14:47:29 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.50:8443
  name: pause-711635
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22122-131207/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 14:46:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.72.235:8443
  name: running-upgrade-352355
contexts:
- context:
    cluster: pause-711635
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 14:47:29 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-711635
  name: pause-711635
- context:
    cluster: running-upgrade-352355
    user: running-upgrade-352355
  name: running-upgrade-352355
current-context: ""
kind: Config
users:
- name: pause-711635
  user:
    client-certificate: /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/pause-711635/client.crt
    client-key: /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/pause-711635/client.key
- name: running-upgrade-352355
  user:
    client-certificate: /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/running-upgrade-352355/client.crt
    client-key: /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/running-upgrade-352355/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-590122

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-590122"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-590122"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-590122"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-590122"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-590122"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-590122"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-590122"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-590122"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-590122"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-590122"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-590122"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-590122"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-590122"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-590122"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-590122"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-590122"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-590122"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-590122"

                                                
                                                
----------------------- debugLogs end: kubenet-590122 [took: 3.762056678s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-590122" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-590122
--- SKIP: TestNetworkPlugins/group/kubenet (3.95s)
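Note that every probe in the debugLogs block above fails with "context was not found" or "Profile "kubenet-590122" not found": because the kubenet variant is skipped up front (cri-o requires a CNI), no cluster is ever started for that profile, so there is nothing for the collector to inspect. Assuming the same MINIKUBE_HOME and kubeconfig as the CI run, that state can be confirmed with:

	minikube profile list
	kubectl config get-contexts

Only the profiles that existed at the time (pause-711635 and running-upgrade-352355 in the kubectl config dump above) are listed.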

                                                
                                    
TestNetworkPlugins/group/cilium (5.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-590122 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-590122

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-590122

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-590122

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-590122

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-590122

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-590122

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-590122

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-590122

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-590122

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-590122

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590122"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590122"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590122"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-590122

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590122"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590122"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-590122" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-590122" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-590122" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-590122" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-590122" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-590122" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-590122" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-590122" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590122"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590122"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590122"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590122"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590122"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-590122

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-590122

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-590122" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-590122" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-590122

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-590122

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-590122" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-590122" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-590122" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-590122" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-590122" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590122"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590122"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590122"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590122"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590122"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22122-131207/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 14:46:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.72.235:8443
  name: running-upgrade-352355
contexts:
- context:
    cluster: running-upgrade-352355
    user: running-upgrade-352355
  name: running-upgrade-352355
current-context: ""
kind: Config
users:
- name: running-upgrade-352355
  user:
    client-certificate: /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/running-upgrade-352355/client.crt
    client-key: /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/running-upgrade-352355/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-590122

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590122"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590122"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590122"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590122"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590122"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590122"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590122"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590122"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590122"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590122"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590122"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590122"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590122"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590122"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590122"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590122"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590122"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-590122" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590122"

                                                
                                                
----------------------- debugLogs end: cilium-590122 [took: 5.426334266s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-590122" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-590122
--- SKIP: TestNetworkPlugins/group/cilium (5.64s)

                                                
                                    