Test Report: KVM_Linux_crio 21503

0729d8e142017243e3350a16dd07e8c0c152f883:2025-09-08:41331

Test fail (7/329)

TestAddons/parallel/Ingress (162.56s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-451875 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-451875 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-451875 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [faee7926-dfb9-4e96-b158-707d01e57f27] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [faee7926-dfb9-4e96-b158-707d01e57f27] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 16.003979318s
I0908 10:34:38.046955  752332 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-451875 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-451875 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m14.383077558s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-451875 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-451875 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.92
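Note (not part of the original log): the failing step above is the in-VM curl against the ingress controller, run as out/minikube-linux-amd64 -p addons-451875 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'", which exited with status 28 (curl's "operation timed out" code) after 2m14s. As a rough reproduction aid only, the Go sketch below drives the same command and retries until a deadline; the binary path, profile name, and Host header are copied from the log above, while the 30s per-request timeout, 10s retry interval, and 3m deadline are arbitrary assumptions, not values used by addons_test.go.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Assumed values, copied from the log above; adjust for a local run.
	const minikube = "out/minikube-linux-amd64"
	const profile = "addons-451875"
	curl := "curl -s -m 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"

	deadline := time.Now().Add(3 * time.Minute)
	for time.Now().Before(deadline) {
		// Run curl inside the VM via `minikube ssh`, matching the failing step.
		out, err := exec.Command(minikube, "-p", profile, "ssh", curl).CombinedOutput()
		if err == nil {
			fmt.Printf("ingress responded:\n%s\n", out)
			return
		}
		// curl exit code 28 means the request timed out inside the VM; wait and retry.
		fmt.Printf("no response yet (%v); retrying\n", err)
		time.Sleep(10 * time.Second)
	}
	fmt.Println("ingress did not respond before the deadline")
}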
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-451875 -n addons-451875
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-451875 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-451875 logs -n 25: (1.375455068s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-049029                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-049029 │ jenkins │ v1.36.0 │ 08 Sep 25 10:30 UTC │ 08 Sep 25 10:30 UTC │
	│ start   │ --download-only -p binary-mirror-286578 --alsologtostderr --binary-mirror http://127.0.0.1:39233 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-286578 │ jenkins │ v1.36.0 │ 08 Sep 25 10:30 UTC │                     │
	│ delete  │ -p binary-mirror-286578                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-286578 │ jenkins │ v1.36.0 │ 08 Sep 25 10:30 UTC │ 08 Sep 25 10:30 UTC │
	│ addons  │ enable dashboard -p addons-451875                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-451875        │ jenkins │ v1.36.0 │ 08 Sep 25 10:30 UTC │                     │
	│ addons  │ disable dashboard -p addons-451875                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-451875        │ jenkins │ v1.36.0 │ 08 Sep 25 10:30 UTC │                     │
	│ start   │ -p addons-451875 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-451875        │ jenkins │ v1.36.0 │ 08 Sep 25 10:30 UTC │ 08 Sep 25 10:33 UTC │
	│ addons  │ addons-451875 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-451875        │ jenkins │ v1.36.0 │ 08 Sep 25 10:33 UTC │ 08 Sep 25 10:33 UTC │
	│ addons  │ addons-451875 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-451875        │ jenkins │ v1.36.0 │ 08 Sep 25 10:33 UTC │ 08 Sep 25 10:33 UTC │
	│ addons  │ enable headlamp -p addons-451875 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-451875        │ jenkins │ v1.36.0 │ 08 Sep 25 10:33 UTC │ 08 Sep 25 10:33 UTC │
	│ addons  │ addons-451875 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-451875        │ jenkins │ v1.36.0 │ 08 Sep 25 10:33 UTC │ 08 Sep 25 10:34 UTC │
	│ addons  │ addons-451875 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-451875        │ jenkins │ v1.36.0 │ 08 Sep 25 10:33 UTC │ 08 Sep 25 10:34 UTC │
	│ addons  │ addons-451875 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-451875        │ jenkins │ v1.36.0 │ 08 Sep 25 10:34 UTC │ 08 Sep 25 10:34 UTC │
	│ addons  │ addons-451875 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-451875        │ jenkins │ v1.36.0 │ 08 Sep 25 10:34 UTC │ 08 Sep 25 10:34 UTC │
	│ addons  │ addons-451875 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-451875        │ jenkins │ v1.36.0 │ 08 Sep 25 10:34 UTC │ 08 Sep 25 10:34 UTC │
	│ ip      │ addons-451875 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-451875        │ jenkins │ v1.36.0 │ 08 Sep 25 10:34 UTC │ 08 Sep 25 10:34 UTC │
	│ addons  │ addons-451875 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-451875        │ jenkins │ v1.36.0 │ 08 Sep 25 10:34 UTC │ 08 Sep 25 10:34 UTC │
	│ ssh     │ addons-451875 ssh cat /opt/local-path-provisioner/pvc-2a2fc39d-914a-4def-bafb-67a8b986f998_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                      │ addons-451875        │ jenkins │ v1.36.0 │ 08 Sep 25 10:34 UTC │ 08 Sep 25 10:34 UTC │
	│ addons  │ addons-451875 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-451875        │ jenkins │ v1.36.0 │ 08 Sep 25 10:34 UTC │ 08 Sep 25 10:35 UTC │
	│ addons  │ addons-451875 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-451875        │ jenkins │ v1.36.0 │ 08 Sep 25 10:34 UTC │ 08 Sep 25 10:34 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-451875                                                                                                                                                                                                                                                                                                                                                                                         │ addons-451875        │ jenkins │ v1.36.0 │ 08 Sep 25 10:34 UTC │ 08 Sep 25 10:34 UTC │
	│ addons  │ addons-451875 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-451875        │ jenkins │ v1.36.0 │ 08 Sep 25 10:34 UTC │ 08 Sep 25 10:34 UTC │
	│ ssh     │ addons-451875 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-451875        │ jenkins │ v1.36.0 │ 08 Sep 25 10:34 UTC │                     │
	│ addons  │ addons-451875 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-451875        │ jenkins │ v1.36.0 │ 08 Sep 25 10:35 UTC │ 08 Sep 25 10:35 UTC │
	│ addons  │ addons-451875 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-451875        │ jenkins │ v1.36.0 │ 08 Sep 25 10:35 UTC │ 08 Sep 25 10:35 UTC │
	│ ip      │ addons-451875 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-451875        │ jenkins │ v1.36.0 │ 08 Sep 25 10:36 UTC │ 08 Sep 25 10:36 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 10:30:01
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 10:30:01.014432  753065 out.go:360] Setting OutFile to fd 1 ...
	I0908 10:30:01.014707  753065 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 10:30:01.014719  753065 out.go:374] Setting ErrFile to fd 2...
	I0908 10:30:01.014723  753065 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 10:30:01.014905  753065 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21503-748170/.minikube/bin
	I0908 10:30:01.015500  753065 out.go:368] Setting JSON to false
	I0908 10:30:01.016429  753065 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":69117,"bootTime":1757258284,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 10:30:01.016529  753065 start.go:140] virtualization: kvm guest
	I0908 10:30:01.018472  753065 out.go:179] * [addons-451875] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 10:30:01.019620  753065 notify.go:220] Checking for updates...
	I0908 10:30:01.019648  753065 out.go:179]   - MINIKUBE_LOCATION=21503
	I0908 10:30:01.020839  753065 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 10:30:01.022007  753065 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21503-748170/kubeconfig
	I0908 10:30:01.023145  753065 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21503-748170/.minikube
	I0908 10:30:01.024400  753065 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 10:30:01.025592  753065 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 10:30:01.026758  753065 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 10:30:01.058212  753065 out.go:179] * Using the kvm2 driver based on user configuration
	I0908 10:30:01.059422  753065 start.go:304] selected driver: kvm2
	I0908 10:30:01.059438  753065 start.go:918] validating driver "kvm2" against <nil>
	I0908 10:30:01.059449  753065 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 10:30:01.060183  753065 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 10:30:01.060245  753065 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21503-748170/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0908 10:30:01.075181  753065 install.go:137] /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0908 10:30:01.075223  753065 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 10:30:01.075452  753065 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 10:30:01.075483  753065 cni.go:84] Creating CNI manager for ""
	I0908 10:30:01.075525  753065 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0908 10:30:01.075534  753065 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0908 10:30:01.075586  753065 start.go:348] cluster config:
	{Name:addons-451875 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-451875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: A
utoPauseInterval:1m0s}
	I0908 10:30:01.075669  753065 iso.go:125] acquiring lock: {Name:mk013a3bcd14eba8870ec8e08630600588ab11c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 10:30:01.077124  753065 out.go:179] * Starting "addons-451875" primary control-plane node in "addons-451875" cluster
	I0908 10:30:01.078060  753065 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 10:30:01.078101  753065 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21503-748170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0908 10:30:01.078110  753065 cache.go:58] Caching tarball of preloaded images
	I0908 10:30:01.078181  753065 preload.go:172] Found /home/jenkins/minikube-integration/21503-748170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0908 10:30:01.078190  753065 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0908 10:30:01.078500  753065 profile.go:143] Saving config to /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/config.json ...
	I0908 10:30:01.078522  753065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/config.json: {Name:mka36df58a94219c3b4c2eee852941a4666bfc6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 10:30:01.078662  753065 start.go:360] acquireMachinesLock for addons-451875: {Name:mkc620e3900da426b9c156141af1783a234a8bd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0908 10:30:01.078708  753065 start.go:364] duration metric: took 33.55µs to acquireMachinesLock for "addons-451875"
	I0908 10:30:01.078729  753065 start.go:93] Provisioning new machine with config: &{Name:addons-451875 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.0 ClusterName:addons-451875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0908 10:30:01.078772  753065 start.go:125] createHost starting for "" (driver="kvm2")
	I0908 10:30:01.080917  753065 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0908 10:30:01.081032  753065 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 10:30:01.081074  753065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 10:30:01.095524  753065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43637
	I0908 10:30:01.096063  753065 main.go:141] libmachine: () Calling .GetVersion
	I0908 10:30:01.096641  753065 main.go:141] libmachine: Using API Version  1
	I0908 10:30:01.096662  753065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 10:30:01.097062  753065 main.go:141] libmachine: () Calling .GetMachineName
	I0908 10:30:01.097284  753065 main.go:141] libmachine: (addons-451875) Calling .GetMachineName
	I0908 10:30:01.097446  753065 main.go:141] libmachine: (addons-451875) Calling .DriverName
	I0908 10:30:01.097620  753065 start.go:159] libmachine.API.Create for "addons-451875" (driver="kvm2")
	I0908 10:30:01.097654  753065 client.go:168] LocalClient.Create starting
	I0908 10:30:01.097700  753065 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21503-748170/.minikube/certs/ca.pem
	I0908 10:30:01.193624  753065 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21503-748170/.minikube/certs/cert.pem
	I0908 10:30:01.781818  753065 main.go:141] libmachine: Running pre-create checks...
	I0908 10:30:01.781844  753065 main.go:141] libmachine: (addons-451875) Calling .PreCreateCheck
	I0908 10:30:01.782363  753065 main.go:141] libmachine: (addons-451875) Calling .GetConfigRaw
	I0908 10:30:01.782917  753065 main.go:141] libmachine: Creating machine...
	I0908 10:30:01.782936  753065 main.go:141] libmachine: (addons-451875) Calling .Create
	I0908 10:30:01.783143  753065 main.go:141] libmachine: (addons-451875) creating KVM machine...
	I0908 10:30:01.783168  753065 main.go:141] libmachine: (addons-451875) creating network...
	I0908 10:30:01.784538  753065 main.go:141] libmachine: (addons-451875) DBG | found existing default KVM network
	I0908 10:30:01.785107  753065 main.go:141] libmachine: (addons-451875) DBG | I0908 10:30:01.784950  753088 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000208dd0}
	I0908 10:30:01.785131  753065 main.go:141] libmachine: (addons-451875) DBG | created network xml: 
	I0908 10:30:01.785140  753065 main.go:141] libmachine: (addons-451875) DBG | <network>
	I0908 10:30:01.785155  753065 main.go:141] libmachine: (addons-451875) DBG |   <name>mk-addons-451875</name>
	I0908 10:30:01.785164  753065 main.go:141] libmachine: (addons-451875) DBG |   <dns enable='no'/>
	I0908 10:30:01.785172  753065 main.go:141] libmachine: (addons-451875) DBG |   
	I0908 10:30:01.785183  753065 main.go:141] libmachine: (addons-451875) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0908 10:30:01.785204  753065 main.go:141] libmachine: (addons-451875) DBG |     <dhcp>
	I0908 10:30:01.785245  753065 main.go:141] libmachine: (addons-451875) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0908 10:30:01.785269  753065 main.go:141] libmachine: (addons-451875) DBG |     </dhcp>
	I0908 10:30:01.785279  753065 main.go:141] libmachine: (addons-451875) DBG |   </ip>
	I0908 10:30:01.785313  753065 main.go:141] libmachine: (addons-451875) DBG |   
	I0908 10:30:01.785339  753065 main.go:141] libmachine: (addons-451875) DBG | </network>
	I0908 10:30:01.785356  753065 main.go:141] libmachine: (addons-451875) DBG | 
	I0908 10:30:01.790233  753065 main.go:141] libmachine: (addons-451875) DBG | trying to create private KVM network mk-addons-451875 192.168.39.0/24...
	I0908 10:30:01.859039  753065 main.go:141] libmachine: (addons-451875) setting up store path in /home/jenkins/minikube-integration/21503-748170/.minikube/machines/addons-451875 ...
	I0908 10:30:01.859084  753065 main.go:141] libmachine: (addons-451875) building disk image from file:///home/jenkins/minikube-integration/21503-748170/.minikube/cache/iso/amd64/minikube-v1.36.0-1756980912-21488-amd64.iso
	I0908 10:30:01.859097  753065 main.go:141] libmachine: (addons-451875) DBG | private KVM network mk-addons-451875 192.168.39.0/24 created
	I0908 10:30:01.859114  753065 main.go:141] libmachine: (addons-451875) Downloading /home/jenkins/minikube-integration/21503-748170/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21503-748170/.minikube/cache/iso/amd64/minikube-v1.36.0-1756980912-21488-amd64.iso...
	I0908 10:30:01.859137  753065 main.go:141] libmachine: (addons-451875) DBG | I0908 10:30:01.858956  753088 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21503-748170/.minikube
	I0908 10:30:02.160228  753065 main.go:141] libmachine: (addons-451875) DBG | I0908 10:30:02.160060  753088 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21503-748170/.minikube/machines/addons-451875/id_rsa...
	I0908 10:30:02.323936  753065 main.go:141] libmachine: (addons-451875) DBG | I0908 10:30:02.323808  753088 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21503-748170/.minikube/machines/addons-451875/addons-451875.rawdisk...
	I0908 10:30:02.323969  753065 main.go:141] libmachine: (addons-451875) DBG | Writing magic tar header
	I0908 10:30:02.323983  753065 main.go:141] libmachine: (addons-451875) DBG | Writing SSH key tar header
	I0908 10:30:02.323997  753065 main.go:141] libmachine: (addons-451875) DBG | I0908 10:30:02.323940  753088 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21503-748170/.minikube/machines/addons-451875 ...
	I0908 10:30:02.324013  753065 main.go:141] libmachine: (addons-451875) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21503-748170/.minikube/machines/addons-451875
	I0908 10:30:02.324087  753065 main.go:141] libmachine: (addons-451875) setting executable bit set on /home/jenkins/minikube-integration/21503-748170/.minikube/machines/addons-451875 (perms=drwx------)
	I0908 10:30:02.324114  753065 main.go:141] libmachine: (addons-451875) setting executable bit set on /home/jenkins/minikube-integration/21503-748170/.minikube/machines (perms=drwxr-xr-x)
	I0908 10:30:02.324129  753065 main.go:141] libmachine: (addons-451875) setting executable bit set on /home/jenkins/minikube-integration/21503-748170/.minikube (perms=drwxr-xr-x)
	I0908 10:30:02.324139  753065 main.go:141] libmachine: (addons-451875) setting executable bit set on /home/jenkins/minikube-integration/21503-748170 (perms=drwxrwxr-x)
	I0908 10:30:02.324149  753065 main.go:141] libmachine: (addons-451875) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21503-748170/.minikube/machines
	I0908 10:30:02.324161  753065 main.go:141] libmachine: (addons-451875) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21503-748170/.minikube
	I0908 10:30:02.324168  753065 main.go:141] libmachine: (addons-451875) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21503-748170
	I0908 10:30:02.324179  753065 main.go:141] libmachine: (addons-451875) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0908 10:30:02.324186  753065 main.go:141] libmachine: (addons-451875) DBG | checking permissions on dir: /home/jenkins
	I0908 10:30:02.324196  753065 main.go:141] libmachine: (addons-451875) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0908 10:30:02.324214  753065 main.go:141] libmachine: (addons-451875) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0908 10:30:02.324234  753065 main.go:141] libmachine: (addons-451875) DBG | checking permissions on dir: /home
	I0908 10:30:02.324241  753065 main.go:141] libmachine: (addons-451875) creating domain...
	I0908 10:30:02.324251  753065 main.go:141] libmachine: (addons-451875) DBG | skipping /home - not owner
	I0908 10:30:02.325497  753065 main.go:141] libmachine: (addons-451875) define libvirt domain using xml: 
	I0908 10:30:02.325537  753065 main.go:141] libmachine: (addons-451875) <domain type='kvm'>
	I0908 10:30:02.325546  753065 main.go:141] libmachine: (addons-451875)   <name>addons-451875</name>
	I0908 10:30:02.325551  753065 main.go:141] libmachine: (addons-451875)   <memory unit='MiB'>4096</memory>
	I0908 10:30:02.325556  753065 main.go:141] libmachine: (addons-451875)   <vcpu>2</vcpu>
	I0908 10:30:02.325563  753065 main.go:141] libmachine: (addons-451875)   <features>
	I0908 10:30:02.325568  753065 main.go:141] libmachine: (addons-451875)     <acpi/>
	I0908 10:30:02.325575  753065 main.go:141] libmachine: (addons-451875)     <apic/>
	I0908 10:30:02.325579  753065 main.go:141] libmachine: (addons-451875)     <pae/>
	I0908 10:30:02.325585  753065 main.go:141] libmachine: (addons-451875)     
	I0908 10:30:02.325590  753065 main.go:141] libmachine: (addons-451875)   </features>
	I0908 10:30:02.325597  753065 main.go:141] libmachine: (addons-451875)   <cpu mode='host-passthrough'>
	I0908 10:30:02.325619  753065 main.go:141] libmachine: (addons-451875)   
	I0908 10:30:02.325627  753065 main.go:141] libmachine: (addons-451875)   </cpu>
	I0908 10:30:02.325632  753065 main.go:141] libmachine: (addons-451875)   <os>
	I0908 10:30:02.325636  753065 main.go:141] libmachine: (addons-451875)     <type>hvm</type>
	I0908 10:30:02.325641  753065 main.go:141] libmachine: (addons-451875)     <boot dev='cdrom'/>
	I0908 10:30:02.325645  753065 main.go:141] libmachine: (addons-451875)     <boot dev='hd'/>
	I0908 10:30:02.325650  753065 main.go:141] libmachine: (addons-451875)     <bootmenu enable='no'/>
	I0908 10:30:02.325657  753065 main.go:141] libmachine: (addons-451875)   </os>
	I0908 10:30:02.325661  753065 main.go:141] libmachine: (addons-451875)   <devices>
	I0908 10:30:02.325668  753065 main.go:141] libmachine: (addons-451875)     <disk type='file' device='cdrom'>
	I0908 10:30:02.325676  753065 main.go:141] libmachine: (addons-451875)       <source file='/home/jenkins/minikube-integration/21503-748170/.minikube/machines/addons-451875/boot2docker.iso'/>
	I0908 10:30:02.325683  753065 main.go:141] libmachine: (addons-451875)       <target dev='hdc' bus='scsi'/>
	I0908 10:30:02.325688  753065 main.go:141] libmachine: (addons-451875)       <readonly/>
	I0908 10:30:02.325692  753065 main.go:141] libmachine: (addons-451875)     </disk>
	I0908 10:30:02.325698  753065 main.go:141] libmachine: (addons-451875)     <disk type='file' device='disk'>
	I0908 10:30:02.325710  753065 main.go:141] libmachine: (addons-451875)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0908 10:30:02.325760  753065 main.go:141] libmachine: (addons-451875)       <source file='/home/jenkins/minikube-integration/21503-748170/.minikube/machines/addons-451875/addons-451875.rawdisk'/>
	I0908 10:30:02.325785  753065 main.go:141] libmachine: (addons-451875)       <target dev='hda' bus='virtio'/>
	I0908 10:30:02.325796  753065 main.go:141] libmachine: (addons-451875)     </disk>
	I0908 10:30:02.325808  753065 main.go:141] libmachine: (addons-451875)     <interface type='network'>
	I0908 10:30:02.325820  753065 main.go:141] libmachine: (addons-451875)       <source network='mk-addons-451875'/>
	I0908 10:30:02.325831  753065 main.go:141] libmachine: (addons-451875)       <model type='virtio'/>
	I0908 10:30:02.325840  753065 main.go:141] libmachine: (addons-451875)     </interface>
	I0908 10:30:02.325847  753065 main.go:141] libmachine: (addons-451875)     <interface type='network'>
	I0908 10:30:02.325857  753065 main.go:141] libmachine: (addons-451875)       <source network='default'/>
	I0908 10:30:02.325867  753065 main.go:141] libmachine: (addons-451875)       <model type='virtio'/>
	I0908 10:30:02.325877  753065 main.go:141] libmachine: (addons-451875)     </interface>
	I0908 10:30:02.325888  753065 main.go:141] libmachine: (addons-451875)     <serial type='pty'>
	I0908 10:30:02.325899  753065 main.go:141] libmachine: (addons-451875)       <target port='0'/>
	I0908 10:30:02.325909  753065 main.go:141] libmachine: (addons-451875)     </serial>
	I0908 10:30:02.325918  753065 main.go:141] libmachine: (addons-451875)     <console type='pty'>
	I0908 10:30:02.325929  753065 main.go:141] libmachine: (addons-451875)       <target type='serial' port='0'/>
	I0908 10:30:02.325950  753065 main.go:141] libmachine: (addons-451875)     </console>
	I0908 10:30:02.325971  753065 main.go:141] libmachine: (addons-451875)     <rng model='virtio'>
	I0908 10:30:02.325989  753065 main.go:141] libmachine: (addons-451875)       <backend model='random'>/dev/random</backend>
	I0908 10:30:02.326006  753065 main.go:141] libmachine: (addons-451875)     </rng>
	I0908 10:30:02.326020  753065 main.go:141] libmachine: (addons-451875)     
	I0908 10:30:02.326036  753065 main.go:141] libmachine: (addons-451875)     
	I0908 10:30:02.326048  753065 main.go:141] libmachine: (addons-451875)   </devices>
	I0908 10:30:02.326057  753065 main.go:141] libmachine: (addons-451875) </domain>
	I0908 10:30:02.326069  753065 main.go:141] libmachine: (addons-451875) 
	I0908 10:30:02.330347  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined MAC address 52:54:00:af:32:34 in network default
	I0908 10:30:02.330923  753065 main.go:141] libmachine: (addons-451875) starting domain...
	I0908 10:30:02.330958  753065 main.go:141] libmachine: (addons-451875) ensuring networks are active...
	I0908 10:30:02.330971  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:02.331829  753065 main.go:141] libmachine: (addons-451875) Ensuring network default is active
	I0908 10:30:02.332213  753065 main.go:141] libmachine: (addons-451875) Ensuring network mk-addons-451875 is active
	I0908 10:30:02.332740  753065 main.go:141] libmachine: (addons-451875) getting domain XML...
	I0908 10:30:02.333632  753065 main.go:141] libmachine: (addons-451875) creating domain...
	I0908 10:30:03.526564  753065 main.go:141] libmachine: (addons-451875) waiting for IP...
	I0908 10:30:03.527329  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:03.527759  753065 main.go:141] libmachine: (addons-451875) DBG | unable to find current IP address of domain addons-451875 in network mk-addons-451875
	I0908 10:30:03.527851  753065 main.go:141] libmachine: (addons-451875) DBG | I0908 10:30:03.527770  753088 retry.go:31] will retry after 218.739357ms: waiting for domain to come up
	I0908 10:30:03.748187  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:03.748618  753065 main.go:141] libmachine: (addons-451875) DBG | unable to find current IP address of domain addons-451875 in network mk-addons-451875
	I0908 10:30:03.748647  753065 main.go:141] libmachine: (addons-451875) DBG | I0908 10:30:03.748571  753088 retry.go:31] will retry after 290.044931ms: waiting for domain to come up
	I0908 10:30:04.040530  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:04.041032  753065 main.go:141] libmachine: (addons-451875) DBG | unable to find current IP address of domain addons-451875 in network mk-addons-451875
	I0908 10:30:04.041065  753065 main.go:141] libmachine: (addons-451875) DBG | I0908 10:30:04.040965  753088 retry.go:31] will retry after 337.66236ms: waiting for domain to come up
	I0908 10:30:04.380661  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:04.381167  753065 main.go:141] libmachine: (addons-451875) DBG | unable to find current IP address of domain addons-451875 in network mk-addons-451875
	I0908 10:30:04.381241  753065 main.go:141] libmachine: (addons-451875) DBG | I0908 10:30:04.381144  753088 retry.go:31] will retry after 544.022443ms: waiting for domain to come up
	I0908 10:30:04.926990  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:04.927493  753065 main.go:141] libmachine: (addons-451875) DBG | unable to find current IP address of domain addons-451875 in network mk-addons-451875
	I0908 10:30:04.927526  753065 main.go:141] libmachine: (addons-451875) DBG | I0908 10:30:04.927428  753088 retry.go:31] will retry after 547.21064ms: waiting for domain to come up
	I0908 10:30:05.476227  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:05.476710  753065 main.go:141] libmachine: (addons-451875) DBG | unable to find current IP address of domain addons-451875 in network mk-addons-451875
	I0908 10:30:05.476746  753065 main.go:141] libmachine: (addons-451875) DBG | I0908 10:30:05.476645  753088 retry.go:31] will retry after 804.701727ms: waiting for domain to come up
	I0908 10:30:06.283575  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:06.283986  753065 main.go:141] libmachine: (addons-451875) DBG | unable to find current IP address of domain addons-451875 in network mk-addons-451875
	I0908 10:30:06.284014  753065 main.go:141] libmachine: (addons-451875) DBG | I0908 10:30:06.283959  753088 retry.go:31] will retry after 866.206487ms: waiting for domain to come up
	I0908 10:30:07.151871  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:07.152192  753065 main.go:141] libmachine: (addons-451875) DBG | unable to find current IP address of domain addons-451875 in network mk-addons-451875
	I0908 10:30:07.152224  753065 main.go:141] libmachine: (addons-451875) DBG | I0908 10:30:07.152141  753088 retry.go:31] will retry after 1.337586462s: waiting for domain to come up
	I0908 10:30:08.492007  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:08.492428  753065 main.go:141] libmachine: (addons-451875) DBG | unable to find current IP address of domain addons-451875 in network mk-addons-451875
	I0908 10:30:08.492465  753065 main.go:141] libmachine: (addons-451875) DBG | I0908 10:30:08.492415  753088 retry.go:31] will retry after 1.662745857s: waiting for domain to come up
	I0908 10:30:10.157426  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:10.157818  753065 main.go:141] libmachine: (addons-451875) DBG | unable to find current IP address of domain addons-451875 in network mk-addons-451875
	I0908 10:30:10.157849  753065 main.go:141] libmachine: (addons-451875) DBG | I0908 10:30:10.157759  753088 retry.go:31] will retry after 1.428533541s: waiting for domain to come up
	I0908 10:30:11.588067  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:11.588544  753065 main.go:141] libmachine: (addons-451875) DBG | unable to find current IP address of domain addons-451875 in network mk-addons-451875
	I0908 10:30:11.588572  753065 main.go:141] libmachine: (addons-451875) DBG | I0908 10:30:11.588499  753088 retry.go:31] will retry after 1.801926364s: waiting for domain to come up
	I0908 10:30:13.392457  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:13.392960  753065 main.go:141] libmachine: (addons-451875) DBG | unable to find current IP address of domain addons-451875 in network mk-addons-451875
	I0908 10:30:13.392996  753065 main.go:141] libmachine: (addons-451875) DBG | I0908 10:30:13.392906  753088 retry.go:31] will retry after 3.390423292s: waiting for domain to come up
	I0908 10:30:16.784774  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:16.785164  753065 main.go:141] libmachine: (addons-451875) DBG | unable to find current IP address of domain addons-451875 in network mk-addons-451875
	I0908 10:30:16.785195  753065 main.go:141] libmachine: (addons-451875) DBG | I0908 10:30:16.785104  753088 retry.go:31] will retry after 4.539667092s: waiting for domain to come up
	I0908 10:30:21.326671  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:21.327071  753065 main.go:141] libmachine: (addons-451875) DBG | unable to find current IP address of domain addons-451875 in network mk-addons-451875
	I0908 10:30:21.327147  753065 main.go:141] libmachine: (addons-451875) DBG | I0908 10:30:21.327037  753088 retry.go:31] will retry after 5.347349474s: waiting for domain to come up
	I0908 10:30:26.676398  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:26.676813  753065 main.go:141] libmachine: (addons-451875) found domain IP: 192.168.39.92
	I0908 10:30:26.676839  753065 main.go:141] libmachine: (addons-451875) reserving static IP address...
	I0908 10:30:26.676880  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has current primary IP address 192.168.39.92 and MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:26.677216  753065 main.go:141] libmachine: (addons-451875) DBG | unable to find host DHCP lease matching {name: "addons-451875", mac: "52:54:00:6b:ce:fb", ip: "192.168.39.92"} in network mk-addons-451875
	I0908 10:30:26.754483  753065 main.go:141] libmachine: (addons-451875) DBG | Getting to WaitForSSH function...
	I0908 10:30:26.754516  753065 main.go:141] libmachine: (addons-451875) reserved static IP address 192.168.39.92 for domain addons-451875
	I0908 10:30:26.754530  753065 main.go:141] libmachine: (addons-451875) waiting for SSH...
	I0908 10:30:26.757133  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:26.757574  753065 main.go:141] libmachine: (addons-451875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:ce:fb", ip: ""} in network mk-addons-451875: {Iface:virbr1 ExpiryTime:2025-09-08 11:30:17 +0000 UTC Type:0 Mac:52:54:00:6b:ce:fb Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:minikube Clientid:01:52:54:00:6b:ce:fb}
	I0908 10:30:26.757731  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined IP address 192.168.39.92 and MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:26.757752  753065 main.go:141] libmachine: (addons-451875) DBG | Using SSH client type: external
	I0908 10:30:26.757780  753065 main.go:141] libmachine: (addons-451875) DBG | Using SSH private key: /home/jenkins/minikube-integration/21503-748170/.minikube/machines/addons-451875/id_rsa (-rw-------)
	I0908 10:30:26.757808  753065 main.go:141] libmachine: (addons-451875) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.92 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21503-748170/.minikube/machines/addons-451875/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0908 10:30:26.757825  753065 main.go:141] libmachine: (addons-451875) DBG | About to run SSH command:
	I0908 10:30:26.757836  753065 main.go:141] libmachine: (addons-451875) DBG | exit 0
	I0908 10:30:26.881267  753065 main.go:141] libmachine: (addons-451875) DBG | SSH cmd err, output: <nil>: 
	I0908 10:30:26.881530  753065 main.go:141] libmachine: (addons-451875) KVM machine creation complete
	I0908 10:30:26.881904  753065 main.go:141] libmachine: (addons-451875) Calling .GetConfigRaw
	I0908 10:30:26.882518  753065 main.go:141] libmachine: (addons-451875) Calling .DriverName
	I0908 10:30:26.882685  753065 main.go:141] libmachine: (addons-451875) Calling .DriverName
	I0908 10:30:26.882842  753065 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0908 10:30:26.882858  753065 main.go:141] libmachine: (addons-451875) Calling .GetState
	I0908 10:30:26.884213  753065 main.go:141] libmachine: Detecting operating system of created instance...
	I0908 10:30:26.884239  753065 main.go:141] libmachine: Waiting for SSH to be available...
	I0908 10:30:26.884244  753065 main.go:141] libmachine: Getting to WaitForSSH function...
	I0908 10:30:26.884249  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHHostname
	I0908 10:30:26.886402  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:26.886785  753065 main.go:141] libmachine: (addons-451875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:ce:fb", ip: ""} in network mk-addons-451875: {Iface:virbr1 ExpiryTime:2025-09-08 11:30:17 +0000 UTC Type:0 Mac:52:54:00:6b:ce:fb Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-451875 Clientid:01:52:54:00:6b:ce:fb}
	I0908 10:30:26.886821  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined IP address 192.168.39.92 and MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:26.886868  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHPort
	I0908 10:30:26.887039  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHKeyPath
	I0908 10:30:26.887185  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHKeyPath
	I0908 10:30:26.887313  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHUsername
	I0908 10:30:26.887483  753065 main.go:141] libmachine: Using SSH client type: native
	I0908 10:30:26.887750  753065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0908 10:30:26.887763  753065 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0908 10:30:26.988792  753065 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 10:30:26.988817  753065 main.go:141] libmachine: Detecting the provisioner...
	I0908 10:30:26.988825  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHHostname
	I0908 10:30:26.991498  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:26.991918  753065 main.go:141] libmachine: (addons-451875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:ce:fb", ip: ""} in network mk-addons-451875: {Iface:virbr1 ExpiryTime:2025-09-08 11:30:17 +0000 UTC Type:0 Mac:52:54:00:6b:ce:fb Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-451875 Clientid:01:52:54:00:6b:ce:fb}
	I0908 10:30:26.991947  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined IP address 192.168.39.92 and MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:26.992057  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHPort
	I0908 10:30:26.992235  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHKeyPath
	I0908 10:30:26.992408  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHKeyPath
	I0908 10:30:26.992547  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHUsername
	I0908 10:30:26.992696  753065 main.go:141] libmachine: Using SSH client type: native
	I0908 10:30:26.992948  753065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0908 10:30:26.992963  753065 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0908 10:30:27.094972  753065 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0908 10:30:27.095117  753065 main.go:141] libmachine: found compatible host: buildroot
	I0908 10:30:27.095148  753065 main.go:141] libmachine: Provisioning with buildroot...
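	(For reference, the provisioner detection above amounts to reading /etc/os-release on the guest; a minimal manual equivalent, using the fields shown in the output above, would be:
		. /etc/os-release && echo "$ID $VERSION_ID"    # prints: buildroot 2025.02
	This is a sketch for orientation only, not a command run by the test.)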
	I0908 10:30:27.095165  753065 main.go:141] libmachine: (addons-451875) Calling .GetMachineName
	I0908 10:30:27.095438  753065 buildroot.go:166] provisioning hostname "addons-451875"
	I0908 10:30:27.095459  753065 main.go:141] libmachine: (addons-451875) Calling .GetMachineName
	I0908 10:30:27.095659  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHHostname
	I0908 10:30:27.098404  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:27.098698  753065 main.go:141] libmachine: (addons-451875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:ce:fb", ip: ""} in network mk-addons-451875: {Iface:virbr1 ExpiryTime:2025-09-08 11:30:17 +0000 UTC Type:0 Mac:52:54:00:6b:ce:fb Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-451875 Clientid:01:52:54:00:6b:ce:fb}
	I0908 10:30:27.098720  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined IP address 192.168.39.92 and MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:27.098853  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHPort
	I0908 10:30:27.099038  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHKeyPath
	I0908 10:30:27.099209  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHKeyPath
	I0908 10:30:27.099316  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHUsername
	I0908 10:30:27.099452  753065 main.go:141] libmachine: Using SSH client type: native
	I0908 10:30:27.099695  753065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0908 10:30:27.099709  753065 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-451875 && echo "addons-451875" | sudo tee /etc/hostname
	I0908 10:30:27.219684  753065 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-451875
	
	I0908 10:30:27.219718  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHHostname
	I0908 10:30:27.222578  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:27.223007  753065 main.go:141] libmachine: (addons-451875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:ce:fb", ip: ""} in network mk-addons-451875: {Iface:virbr1 ExpiryTime:2025-09-08 11:30:17 +0000 UTC Type:0 Mac:52:54:00:6b:ce:fb Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-451875 Clientid:01:52:54:00:6b:ce:fb}
	I0908 10:30:27.223040  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined IP address 192.168.39.92 and MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:27.223230  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHPort
	I0908 10:30:27.223438  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHKeyPath
	I0908 10:30:27.223577  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHKeyPath
	I0908 10:30:27.223706  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHUsername
	I0908 10:30:27.223854  753065 main.go:141] libmachine: Using SSH client type: native
	I0908 10:30:27.224165  753065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0908 10:30:27.224191  753065 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-451875' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-451875/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-451875' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 10:30:27.338254  753065 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 10:30:27.338292  753065 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21503-748170/.minikube CaCertPath:/home/jenkins/minikube-integration/21503-748170/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21503-748170/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21503-748170/.minikube}
	I0908 10:30:27.338332  753065 buildroot.go:174] setting up certificates
	I0908 10:30:27.338350  753065 provision.go:84] configureAuth start
	I0908 10:30:27.338367  753065 main.go:141] libmachine: (addons-451875) Calling .GetMachineName
	I0908 10:30:27.338656  753065 main.go:141] libmachine: (addons-451875) Calling .GetIP
	I0908 10:30:27.341563  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:27.341946  753065 main.go:141] libmachine: (addons-451875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:ce:fb", ip: ""} in network mk-addons-451875: {Iface:virbr1 ExpiryTime:2025-09-08 11:30:17 +0000 UTC Type:0 Mac:52:54:00:6b:ce:fb Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-451875 Clientid:01:52:54:00:6b:ce:fb}
	I0908 10:30:27.341975  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined IP address 192.168.39.92 and MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:27.342226  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHHostname
	I0908 10:30:27.344585  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:27.344948  753065 main.go:141] libmachine: (addons-451875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:ce:fb", ip: ""} in network mk-addons-451875: {Iface:virbr1 ExpiryTime:2025-09-08 11:30:17 +0000 UTC Type:0 Mac:52:54:00:6b:ce:fb Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-451875 Clientid:01:52:54:00:6b:ce:fb}
	I0908 10:30:27.344978  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined IP address 192.168.39.92 and MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:27.345132  753065 provision.go:143] copyHostCerts
	I0908 10:30:27.345215  753065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21503-748170/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21503-748170/.minikube/key.pem (1675 bytes)
	I0908 10:30:27.345356  753065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21503-748170/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21503-748170/.minikube/ca.pem (1078 bytes)
	I0908 10:30:27.345429  753065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21503-748170/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21503-748170/.minikube/cert.pem (1123 bytes)
	I0908 10:30:27.345481  753065 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21503-748170/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21503-748170/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21503-748170/.minikube/certs/ca-key.pem org=jenkins.addons-451875 san=[127.0.0.1 192.168.39.92 addons-451875 localhost minikube]
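	(The server certificate above is generated inside minikube's Go code; a rough openssl equivalent with the same org and SANs — file names here are placeholders, not minikube's actual paths — would look like:
		openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
		  -subj "/O=jenkins.addons-451875/CN=minikube"
		openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
		  -out server.pem -days 365 \
		  -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.39.92,DNS:addons-451875,DNS:localhost,DNS:minikube")
	Sketch only, under the assumption that ca.pem/ca-key.pem are the CA pair referenced in the log.)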
	I0908 10:30:27.512470  753065 provision.go:177] copyRemoteCerts
	I0908 10:30:27.512557  753065 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 10:30:27.512604  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHHostname
	I0908 10:30:27.515465  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:27.515844  753065 main.go:141] libmachine: (addons-451875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:ce:fb", ip: ""} in network mk-addons-451875: {Iface:virbr1 ExpiryTime:2025-09-08 11:30:17 +0000 UTC Type:0 Mac:52:54:00:6b:ce:fb Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-451875 Clientid:01:52:54:00:6b:ce:fb}
	I0908 10:30:27.515878  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined IP address 192.168.39.92 and MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:27.516068  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHPort
	I0908 10:30:27.516274  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHKeyPath
	I0908 10:30:27.516452  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHUsername
	I0908 10:30:27.516607  753065 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/addons-451875/id_rsa Username:docker}
	I0908 10:30:27.602355  753065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0908 10:30:27.631260  753065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0908 10:30:27.659349  753065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0908 10:30:27.687069  753065 provision.go:87] duration metric: took 348.694766ms to configureAuth
	I0908 10:30:27.687100  753065 buildroot.go:189] setting minikube options for container-runtime
	I0908 10:30:27.687368  753065 config.go:182] Loaded profile config "addons-451875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 10:30:27.687476  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHHostname
	I0908 10:30:27.690172  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:27.690529  753065 main.go:141] libmachine: (addons-451875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:ce:fb", ip: ""} in network mk-addons-451875: {Iface:virbr1 ExpiryTime:2025-09-08 11:30:17 +0000 UTC Type:0 Mac:52:54:00:6b:ce:fb Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-451875 Clientid:01:52:54:00:6b:ce:fb}
	I0908 10:30:27.690562  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined IP address 192.168.39.92 and MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:27.690716  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHPort
	I0908 10:30:27.690890  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHKeyPath
	I0908 10:30:27.691053  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHKeyPath
	I0908 10:30:27.691223  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHUsername
	I0908 10:30:27.691386  753065 main.go:141] libmachine: Using SSH client type: native
	I0908 10:30:27.691592  753065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0908 10:30:27.691605  753065 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0908 10:30:27.928029  753065 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
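	(A quick way to confirm on the guest that the option written above took effect — a verification sketch, not part of the captured run:
		cat /etc/sysconfig/crio.minikube    # should contain --insecure-registry 10.96.0.0/12
		systemctl is-active crio            # crio should be active again after the restart
	)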
	I0908 10:30:27.928062  753065 main.go:141] libmachine: Checking connection to Docker...
	I0908 10:30:27.928070  753065 main.go:141] libmachine: (addons-451875) Calling .GetURL
	I0908 10:30:27.929561  753065 main.go:141] libmachine: (addons-451875) DBG | using libvirt version 6000000
	I0908 10:30:27.932055  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:27.932356  753065 main.go:141] libmachine: (addons-451875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:ce:fb", ip: ""} in network mk-addons-451875: {Iface:virbr1 ExpiryTime:2025-09-08 11:30:17 +0000 UTC Type:0 Mac:52:54:00:6b:ce:fb Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-451875 Clientid:01:52:54:00:6b:ce:fb}
	I0908 10:30:27.932386  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined IP address 192.168.39.92 and MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:27.932535  753065 main.go:141] libmachine: Docker is up and running!
	I0908 10:30:27.932552  753065 main.go:141] libmachine: Reticulating splines...
	I0908 10:30:27.932561  753065 client.go:171] duration metric: took 26.834894573s to LocalClient.Create
	I0908 10:30:27.932597  753065 start.go:167] duration metric: took 26.834977926s to libmachine.API.Create "addons-451875"
	I0908 10:30:27.932610  753065 start.go:293] postStartSetup for "addons-451875" (driver="kvm2")
	I0908 10:30:27.932629  753065 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 10:30:27.932657  753065 main.go:141] libmachine: (addons-451875) Calling .DriverName
	I0908 10:30:27.932923  753065 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 10:30:27.932957  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHHostname
	I0908 10:30:27.936401  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:27.936831  753065 main.go:141] libmachine: (addons-451875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:ce:fb", ip: ""} in network mk-addons-451875: {Iface:virbr1 ExpiryTime:2025-09-08 11:30:17 +0000 UTC Type:0 Mac:52:54:00:6b:ce:fb Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-451875 Clientid:01:52:54:00:6b:ce:fb}
	I0908 10:30:27.936885  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined IP address 192.168.39.92 and MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:27.937052  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHPort
	I0908 10:30:27.937262  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHKeyPath
	I0908 10:30:27.937425  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHUsername
	I0908 10:30:27.937570  753065 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/addons-451875/id_rsa Username:docker}
	I0908 10:30:28.021975  753065 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 10:30:28.027220  753065 info.go:137] Remote host: Buildroot 2025.02
	I0908 10:30:28.027249  753065 filesync.go:126] Scanning /home/jenkins/minikube-integration/21503-748170/.minikube/addons for local assets ...
	I0908 10:30:28.027343  753065 filesync.go:126] Scanning /home/jenkins/minikube-integration/21503-748170/.minikube/files for local assets ...
	I0908 10:30:28.027380  753065 start.go:296] duration metric: took 94.762007ms for postStartSetup
	I0908 10:30:28.027433  753065 main.go:141] libmachine: (addons-451875) Calling .GetConfigRaw
	I0908 10:30:28.028063  753065 main.go:141] libmachine: (addons-451875) Calling .GetIP
	I0908 10:30:28.030867  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:28.031264  753065 main.go:141] libmachine: (addons-451875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:ce:fb", ip: ""} in network mk-addons-451875: {Iface:virbr1 ExpiryTime:2025-09-08 11:30:17 +0000 UTC Type:0 Mac:52:54:00:6b:ce:fb Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-451875 Clientid:01:52:54:00:6b:ce:fb}
	I0908 10:30:28.031292  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined IP address 192.168.39.92 and MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:28.031540  753065 profile.go:143] Saving config to /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/config.json ...
	I0908 10:30:28.031746  753065 start.go:128] duration metric: took 26.952961774s to createHost
	I0908 10:30:28.031781  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHHostname
	I0908 10:30:28.033641  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:28.034010  753065 main.go:141] libmachine: (addons-451875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:ce:fb", ip: ""} in network mk-addons-451875: {Iface:virbr1 ExpiryTime:2025-09-08 11:30:17 +0000 UTC Type:0 Mac:52:54:00:6b:ce:fb Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-451875 Clientid:01:52:54:00:6b:ce:fb}
	I0908 10:30:28.034110  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined IP address 192.168.39.92 and MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:28.034231  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHPort
	I0908 10:30:28.034397  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHKeyPath
	I0908 10:30:28.034563  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHKeyPath
	I0908 10:30:28.034659  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHUsername
	I0908 10:30:28.034789  753065 main.go:141] libmachine: Using SSH client type: native
	I0908 10:30:28.035062  753065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0908 10:30:28.035076  753065 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0908 10:30:28.138939  753065 main.go:141] libmachine: SSH cmd err, output: <nil>: 1757327428.116100463
	
	I0908 10:30:28.138968  753065 fix.go:216] guest clock: 1757327428.116100463
	I0908 10:30:28.138976  753065 fix.go:229] Guest: 2025-09-08 10:30:28.116100463 +0000 UTC Remote: 2025-09-08 10:30:28.031763079 +0000 UTC m=+27.055000995 (delta=84.337384ms)
	I0908 10:30:28.139015  753065 fix.go:200] guest clock delta is within tolerance: 84.337384ms
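	(The clock-skew check above compares the host's wall clock with `date +%s.%N` run on the guest over SSH; a hypothetical stand-alone reproduction, reusing the key path and address from this log, would be:
		HOST_TS=$(date +%s.%N)
		GUEST_TS=$(ssh -o StrictHostKeyChecking=no \
		  -i /home/jenkins/minikube-integration/21503-748170/.minikube/machines/addons-451875/id_rsa \
		  docker@192.168.39.92 'date +%s.%N')
		echo "delta: $(echo "$GUEST_TS - $HOST_TS" | bc)s"
	Sketch only; the test computes the delta in Go rather than with bc.)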
	I0908 10:30:28.139020  753065 start.go:83] releasing machines lock for "addons-451875", held for 27.060302817s
	I0908 10:30:28.139061  753065 main.go:141] libmachine: (addons-451875) Calling .DriverName
	I0908 10:30:28.139355  753065 main.go:141] libmachine: (addons-451875) Calling .GetIP
	I0908 10:30:28.142285  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:28.142638  753065 main.go:141] libmachine: (addons-451875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:ce:fb", ip: ""} in network mk-addons-451875: {Iface:virbr1 ExpiryTime:2025-09-08 11:30:17 +0000 UTC Type:0 Mac:52:54:00:6b:ce:fb Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-451875 Clientid:01:52:54:00:6b:ce:fb}
	I0908 10:30:28.142661  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined IP address 192.168.39.92 and MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:28.142906  753065 main.go:141] libmachine: (addons-451875) Calling .DriverName
	I0908 10:30:28.143422  753065 main.go:141] libmachine: (addons-451875) Calling .DriverName
	I0908 10:30:28.143619  753065 main.go:141] libmachine: (addons-451875) Calling .DriverName
	I0908 10:30:28.143744  753065 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 10:30:28.143804  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHHostname
	I0908 10:30:28.143845  753065 ssh_runner.go:195] Run: cat /version.json
	I0908 10:30:28.143872  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHHostname
	I0908 10:30:28.146523  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:28.146806  753065 main.go:141] libmachine: (addons-451875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:ce:fb", ip: ""} in network mk-addons-451875: {Iface:virbr1 ExpiryTime:2025-09-08 11:30:17 +0000 UTC Type:0 Mac:52:54:00:6b:ce:fb Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-451875 Clientid:01:52:54:00:6b:ce:fb}
	I0908 10:30:28.146837  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined IP address 192.168.39.92 and MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:28.146864  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:28.146982  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHPort
	I0908 10:30:28.147157  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHKeyPath
	I0908 10:30:28.147330  753065 main.go:141] libmachine: (addons-451875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:ce:fb", ip: ""} in network mk-addons-451875: {Iface:virbr1 ExpiryTime:2025-09-08 11:30:17 +0000 UTC Type:0 Mac:52:54:00:6b:ce:fb Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-451875 Clientid:01:52:54:00:6b:ce:fb}
	I0908 10:30:28.147352  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHUsername
	I0908 10:30:28.147361  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined IP address 192.168.39.92 and MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:28.147531  753065 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/addons-451875/id_rsa Username:docker}
	I0908 10:30:28.147562  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHPort
	I0908 10:30:28.147722  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHKeyPath
	I0908 10:30:28.147877  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHUsername
	I0908 10:30:28.148002  753065 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/addons-451875/id_rsa Username:docker}
	I0908 10:30:28.258208  753065 ssh_runner.go:195] Run: systemctl --version
	I0908 10:30:28.264492  753065 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0908 10:30:28.426146  753065 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0908 10:30:28.433012  753065 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0908 10:30:28.433102  753065 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 10:30:28.452909  753065 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0908 10:30:28.452938  753065 start.go:495] detecting cgroup driver to use...
	I0908 10:30:28.453052  753065 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 10:30:28.471676  753065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 10:30:28.489355  753065 docker.go:218] disabling cri-docker service (if available) ...
	I0908 10:30:28.489427  753065 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0908 10:30:28.505486  753065 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0908 10:30:28.521010  753065 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0908 10:30:28.654935  753065 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0908 10:30:28.792408  753065 docker.go:234] disabling docker service ...
	I0908 10:30:28.792478  753065 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0908 10:30:28.809194  753065 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0908 10:30:28.824155  753065 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0908 10:30:29.029689  753065 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0908 10:30:29.167034  753065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 10:30:29.183012  753065 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 10:30:29.205796  753065 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0908 10:30:29.205880  753065 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 10:30:29.217935  753065 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0908 10:30:29.218004  753065 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 10:30:29.230171  753065 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 10:30:29.242414  753065 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 10:30:29.254732  753065 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 10:30:29.267194  753065 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 10:30:29.278994  753065 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 10:30:29.299856  753065 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
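	(All of the sed edits above target /etc/crio/crio.conf.d/02-crio.conf; a one-line check of the result on the guest — a sketch, not something the test runs — could be:
		grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
		  /etc/crio/crio.conf.d/02-crio.conf
	)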
	I0908 10:30:29.312279  753065 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 10:30:29.322351  753065 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0908 10:30:29.322418  753065 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0908 10:30:29.343229  753065 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
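	(The sysctl failure above is expected until the br_netfilter module is loaded; the manual equivalent of the fallback shown here is roughly:
		sudo modprobe br_netfilter
		sysctl net.bridge.bridge-nf-call-iptables              # key exists once the module is loaded
		sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	Sketch for orientation only.)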
	I0908 10:30:29.355602  753065 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 10:30:29.494495  753065 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0908 10:30:29.603329  753065 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0908 10:30:29.603435  753065 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0908 10:30:29.608746  753065 start.go:563] Will wait 60s for crictl version
	I0908 10:30:29.608825  753065 ssh_runner.go:195] Run: which crictl
	I0908 10:30:29.612844  753065 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 10:30:29.656238  753065 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0908 10:30:29.656357  753065 ssh_runner.go:195] Run: crio --version
	I0908 10:30:29.684882  753065 ssh_runner.go:195] Run: crio --version
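	(The readiness checks above can be reproduced by hand on the guest; a short sketch, assuming the default CRI-O socket path used throughout this log:
		stat /var/run/crio/crio.sock                                           # socket must exist
		sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
		crio --version
	)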
	I0908 10:30:29.715451  753065 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0908 10:30:29.716505  753065 main.go:141] libmachine: (addons-451875) Calling .GetIP
	I0908 10:30:29.719143  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:29.719444  753065 main.go:141] libmachine: (addons-451875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:ce:fb", ip: ""} in network mk-addons-451875: {Iface:virbr1 ExpiryTime:2025-09-08 11:30:17 +0000 UTC Type:0 Mac:52:54:00:6b:ce:fb Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-451875 Clientid:01:52:54:00:6b:ce:fb}
	I0908 10:30:29.719470  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined IP address 192.168.39.92 and MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:29.719677  753065 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0908 10:30:29.724133  753065 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 10:30:29.741174  753065 kubeadm.go:875] updating cluster {Name:addons-451875 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-451875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 10:30:29.741314  753065 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 10:30:29.741383  753065 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 10:30:29.782901  753065 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.0". assuming images are not preloaded.
	I0908 10:30:29.782975  753065 ssh_runner.go:195] Run: which lz4
	I0908 10:30:29.787621  753065 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0908 10:30:29.793039  753065 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0908 10:30:29.793062  753065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409455026 bytes)
	I0908 10:30:31.334190  753065 crio.go:462] duration metric: took 1.546601297s to copy over tarball
	I0908 10:30:31.334289  753065 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0908 10:30:32.923045  753065 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.588710712s)
	I0908 10:30:32.923087  753065 crio.go:469] duration metric: took 1.588863819s to extract the tarball
	I0908 10:30:32.923095  753065 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0908 10:30:32.963388  753065 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 10:30:33.014794  753065 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 10:30:33.014821  753065 cache_images.go:85] Images are preloaded, skipping loading
	I0908 10:30:33.014829  753065 kubeadm.go:926] updating node { 192.168.39.92 8443 v1.34.0 crio true true} ...
	I0908 10:30:33.014956  753065 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-451875 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.92
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-451875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0908 10:30:33.015024  753065 ssh_runner.go:195] Run: crio config
	I0908 10:30:33.062975  753065 cni.go:84] Creating CNI manager for ""
	I0908 10:30:33.063009  753065 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0908 10:30:33.063023  753065 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 10:30:33.063045  753065 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.92 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-451875 NodeName:addons-451875 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.92"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.92 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 10:30:33.063177  753065 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.92
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-451875"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.92"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.92"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
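	(One way to sanity-check a generated config like the one above without modifying the node is kubeadm's dry-run mode; a sketch, assuming the same v1.34.0 binary and the config path that kubeadm init uses later in this log:
		sudo /var/lib/minikube/binaries/v1.34.0/kubeadm init \
		  --config /var/tmp/minikube/kubeadm.yaml --dry-run
	)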
	I0908 10:30:33.063246  753065 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 10:30:33.076092  753065 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 10:30:33.076185  753065 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 10:30:33.088730  753065 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0908 10:30:33.109743  753065 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 10:30:33.130085  753065 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I0908 10:30:33.150608  753065 ssh_runner.go:195] Run: grep 192.168.39.92	control-plane.minikube.internal$ /etc/hosts
	I0908 10:30:33.155317  753065 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.92	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 10:30:33.170433  753065 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 10:30:33.310278  753065 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 10:30:33.345419  753065 certs.go:68] Setting up /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875 for IP: 192.168.39.92
	I0908 10:30:33.345445  753065 certs.go:194] generating shared ca certs ...
	I0908 10:30:33.345463  753065 certs.go:226] acquiring lock for ca certs: {Name:mkaa8fe7cb1fe9bdb745b85589d42151c557e20e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 10:30:33.345603  753065 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21503-748170/.minikube/ca.key
	I0908 10:30:33.553866  753065 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21503-748170/.minikube/ca.crt ...
	I0908 10:30:33.553897  753065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21503-748170/.minikube/ca.crt: {Name:mkb40f234b537b250abab0d6f9208af60298e00a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 10:30:33.554078  753065 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21503-748170/.minikube/ca.key ...
	I0908 10:30:33.554089  753065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21503-748170/.minikube/ca.key: {Name:mk03b3b084572bf7275a2204133e6a38f327a138 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 10:30:33.554163  753065 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21503-748170/.minikube/proxy-client-ca.key
	I0908 10:30:33.853311  753065 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21503-748170/.minikube/proxy-client-ca.crt ...
	I0908 10:30:33.853342  753065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21503-748170/.minikube/proxy-client-ca.crt: {Name:mk1c43ff96042554c2d97c416ba1adb9a2023685 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 10:30:33.853505  753065 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21503-748170/.minikube/proxy-client-ca.key ...
	I0908 10:30:33.853517  753065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21503-748170/.minikube/proxy-client-ca.key: {Name:mk89eb0594a5bbae2488aa0843eef5f081b4cd47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 10:30:33.853585  753065 certs.go:256] generating profile certs ...
	I0908 10:30:33.853647  753065 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/client.key
	I0908 10:30:33.853661  753065 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/client.crt with IP's: []
	I0908 10:30:33.911338  753065 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/client.crt ...
	I0908 10:30:33.911374  753065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/client.crt: {Name:mke82969b1ca430d641a3012b16d9d2741477eb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 10:30:33.911529  753065 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/client.key ...
	I0908 10:30:33.911538  753065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/client.key: {Name:mk67c675d7664911a4b2e475ff70db5214a346ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 10:30:33.911611  753065 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/apiserver.key.7c3d55ec
	I0908 10:30:33.911631  753065 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/apiserver.crt.7c3d55ec with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.92]
	I0908 10:30:34.114062  753065 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/apiserver.crt.7c3d55ec ...
	I0908 10:30:34.114098  753065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/apiserver.crt.7c3d55ec: {Name:mkb0479fb2399111e164a38a6b8527e49d08bd32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 10:30:34.114293  753065 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/apiserver.key.7c3d55ec ...
	I0908 10:30:34.114308  753065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/apiserver.key.7c3d55ec: {Name:mkdb51e01ff9b3f7f3e80d711dffa1305457514c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 10:30:34.114389  753065 certs.go:381] copying /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/apiserver.crt.7c3d55ec -> /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/apiserver.crt
	I0908 10:30:34.114488  753065 certs.go:385] copying /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/apiserver.key.7c3d55ec -> /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/apiserver.key
	I0908 10:30:34.114546  753065 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/proxy-client.key
	I0908 10:30:34.114566  753065 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/proxy-client.crt with IP's: []
	I0908 10:30:34.187428  753065 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/proxy-client.crt ...
	I0908 10:30:34.187458  753065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/proxy-client.crt: {Name:mk9c0003303d704f9e99b2e0052a27108a66e47c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 10:30:34.187620  753065 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/proxy-client.key ...
	I0908 10:30:34.187633  753065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/proxy-client.key: {Name:mk7f2b870d86a43dfbc2c36ac16eff13e046d585 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 10:30:34.187797  753065 certs.go:484] found cert: /home/jenkins/minikube-integration/21503-748170/.minikube/certs/ca-key.pem (1679 bytes)
	I0908 10:30:34.187833  753065 certs.go:484] found cert: /home/jenkins/minikube-integration/21503-748170/.minikube/certs/ca.pem (1078 bytes)
	I0908 10:30:34.187861  753065 certs.go:484] found cert: /home/jenkins/minikube-integration/21503-748170/.minikube/certs/cert.pem (1123 bytes)
	I0908 10:30:34.187885  753065 certs.go:484] found cert: /home/jenkins/minikube-integration/21503-748170/.minikube/certs/key.pem (1675 bytes)
	I0908 10:30:34.188454  753065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 10:30:34.230632  753065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0908 10:30:34.266981  753065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 10:30:34.296753  753065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0908 10:30:34.325352  753065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0908 10:30:34.353836  753065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0908 10:30:34.382187  753065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 10:30:34.410221  753065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0908 10:30:34.439149  753065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 10:30:34.467705  753065 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 10:30:34.487295  753065 ssh_runner.go:195] Run: openssl version
	I0908 10:30:34.493936  753065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 10:30:34.506745  753065 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 10:30:34.511745  753065 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 10:30 /usr/share/ca-certificates/minikubeCA.pem
	I0908 10:30:34.511811  753065 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 10:30:34.518726  753065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
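	(The symlink name used above, b5213941.0, is simply the CA's subject hash plus a ".0" suffix; the manual equivalent of the hash-and-link pair of steps is roughly:
		HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
		sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
	Sketch only; minikube performs this via ssh_runner as shown.)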
	I0908 10:30:34.531141  753065 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 10:30:34.535730  753065 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0908 10:30:34.535800  753065 kubeadm.go:392] StartCluster: {Name:addons-451875 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-451875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 10:30:34.535896  753065 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0908 10:30:34.535984  753065 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 10:30:34.576494  753065 cri.go:89] found id: ""
	I0908 10:30:34.576587  753065 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0908 10:30:34.588348  753065 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0908 10:30:34.600935  753065 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0908 10:30:34.613812  753065 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0908 10:30:34.613831  753065 kubeadm.go:157] found existing configuration files:
	
	I0908 10:30:34.613876  753065 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0908 10:30:34.624339  753065 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0908 10:30:34.624396  753065 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0908 10:30:34.636144  753065 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0908 10:30:34.647001  753065 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0908 10:30:34.647075  753065 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0908 10:30:34.658661  753065 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0908 10:30:34.669387  753065 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0908 10:30:34.669461  753065 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0908 10:30:34.680879  753065 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0908 10:30:34.691311  753065 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0908 10:30:34.691372  753065 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0908 10:30:34.702471  753065 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0908 10:30:34.752017  753065 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0908 10:30:34.752103  753065 kubeadm.go:310] [preflight] Running pre-flight checks
	I0908 10:30:34.847819  753065 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0908 10:30:34.847945  753065 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0908 10:30:34.848053  753065 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0908 10:30:34.858858  753065 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0908 10:30:34.995073  753065 out.go:252]   - Generating certificates and keys ...
	I0908 10:30:34.995206  753065 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0908 10:30:34.995304  753065 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0908 10:30:35.030938  753065 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0908 10:30:35.268859  753065 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0908 10:30:35.555411  753065 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0908 10:30:35.666933  753065 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0908 10:30:35.932961  753065 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0908 10:30:35.933150  753065 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-451875 localhost] and IPs [192.168.39.92 127.0.0.1 ::1]
	I0908 10:30:36.081539  753065 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0908 10:30:36.081840  753065 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-451875 localhost] and IPs [192.168.39.92 127.0.0.1 ::1]
	I0908 10:30:36.219702  753065 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0908 10:30:36.616147  753065 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0908 10:30:36.862812  753065 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0908 10:30:36.862998  753065 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0908 10:30:37.143657  753065 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0908 10:30:37.162989  753065 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0908 10:30:37.232597  753065 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0908 10:30:37.433011  753065 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0908 10:30:37.611745  753065 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0908 10:30:37.611837  753065 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0908 10:30:37.613781  753065 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0908 10:30:37.615270  753065 out.go:252]   - Booting up control plane ...
	I0908 10:30:37.615352  753065 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0908 10:30:37.615459  753065 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0908 10:30:37.616050  753065 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0908 10:30:37.638964  753065 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0908 10:30:37.639134  753065 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0908 10:30:37.646929  753065 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0908 10:30:37.647331  753065 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0908 10:30:37.647422  753065 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0908 10:30:37.804513  753065 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0908 10:30:37.804618  753065 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0908 10:30:39.804682  753065 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 2.001384849s
	I0908 10:30:39.807121  753065 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0908 10:30:39.807249  753065 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.39.92:8443/livez
	I0908 10:30:39.807381  753065 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0908 10:30:39.807530  753065 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0908 10:30:42.010981  753065 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.204561373s
	I0908 10:30:43.123141  753065 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 3.317577444s
	I0908 10:30:44.806515  753065 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 5.00144892s
	I0908 10:30:44.819054  753065 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0908 10:30:44.835674  753065 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0908 10:30:44.850026  753065 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0908 10:30:44.850330  753065 kubeadm.go:310] [mark-control-plane] Marking the node addons-451875 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0908 10:30:44.868147  753065 kubeadm.go:310] [bootstrap-token] Using token: jqfzua.xaeg0ql5z0x6mzpq
	I0908 10:30:44.869346  753065 out.go:252]   - Configuring RBAC rules ...
	I0908 10:30:44.869485  753065 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0908 10:30:44.877302  753065 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0908 10:30:44.886010  753065 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0908 10:30:44.890655  753065 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0908 10:30:44.894182  753065 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0908 10:30:44.899542  753065 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0908 10:30:45.215857  753065 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0908 10:30:45.647438  753065 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0908 10:30:46.212625  753065 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0908 10:30:46.213542  753065 kubeadm.go:310] 
	I0908 10:30:46.213616  753065 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0908 10:30:46.213628  753065 kubeadm.go:310] 
	I0908 10:30:46.213749  753065 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0908 10:30:46.213786  753065 kubeadm.go:310] 
	I0908 10:30:46.213835  753065 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0908 10:30:46.213911  753065 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0908 10:30:46.214007  753065 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0908 10:30:46.214040  753065 kubeadm.go:310] 
	I0908 10:30:46.214163  753065 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0908 10:30:46.214183  753065 kubeadm.go:310] 
	I0908 10:30:46.214256  753065 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0908 10:30:46.214269  753065 kubeadm.go:310] 
	I0908 10:30:46.214349  753065 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0908 10:30:46.214480  753065 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0908 10:30:46.214580  753065 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0908 10:30:46.214595  753065 kubeadm.go:310] 
	I0908 10:30:46.214740  753065 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0908 10:30:46.214847  753065 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0908 10:30:46.214864  753065 kubeadm.go:310] 
	I0908 10:30:46.215011  753065 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jqfzua.xaeg0ql5z0x6mzpq \
	I0908 10:30:46.215170  753065 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bbde5405975015570f9d9c8637bc9278c153b27b847418447e83141708646857 \
	I0908 10:30:46.215201  753065 kubeadm.go:310] 	--control-plane 
	I0908 10:30:46.215208  753065 kubeadm.go:310] 
	I0908 10:30:46.215323  753065 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0908 10:30:46.215333  753065 kubeadm.go:310] 
	I0908 10:30:46.215460  753065 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jqfzua.xaeg0ql5z0x6mzpq \
	I0908 10:30:46.215597  753065 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bbde5405975015570f9d9c8637bc9278c153b27b847418447e83141708646857 
	I0908 10:30:46.217605  753065 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0908 10:30:46.217674  753065 cni.go:84] Creating CNI manager for ""
	I0908 10:30:46.217691  753065 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0908 10:30:46.220002  753065 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0908 10:30:46.221119  753065 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0908 10:30:46.236016  753065 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0908 10:30:46.258298  753065 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0908 10:30:46.258447  753065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 10:30:46.258454  753065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-451875 minikube.k8s.io/updated_at=2025_09_08T10_30_46_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=9b5c9e357ec605e3f7a3fbfd5f3e59fa37db6ba2 minikube.k8s.io/name=addons-451875 minikube.k8s.io/primary=true
	I0908 10:30:46.297741  753065 ops.go:34] apiserver oom_adj: -16
	I0908 10:30:46.397600  753065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 10:30:46.897862  753065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 10:30:47.398591  753065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 10:30:47.897939  753065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 10:30:48.397847  753065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 10:30:48.898433  753065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 10:30:49.397777  753065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 10:30:49.898492  753065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 10:30:50.398341  753065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 10:30:50.898542  753065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 10:30:51.007927  753065 kubeadm.go:1105] duration metric: took 4.749569033s to wait for elevateKubeSystemPrivileges
	I0908 10:30:51.007970  753065 kubeadm.go:394] duration metric: took 16.47217771s to StartCluster
	I0908 10:30:51.007996  753065 settings.go:142] acquiring lock: {Name:mk18c67e9470bbfdfeaf7a5d3ce5d7a1813bc966 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 10:30:51.008151  753065 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21503-748170/kubeconfig
	I0908 10:30:51.008707  753065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21503-748170/kubeconfig: {Name:mk78ced2572c8fbe21fb139deb9ae019703be092 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 10:30:51.008979  753065 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0908 10:30:51.009003  753065 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0908 10:30:51.009091  753065 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0908 10:30:51.009217  753065 config.go:182] Loaded profile config "addons-451875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 10:30:51.009246  753065 addons.go:69] Setting yakd=true in profile "addons-451875"
	I0908 10:30:51.009258  753065 addons.go:69] Setting cloud-spanner=true in profile "addons-451875"
	I0908 10:30:51.009274  753065 addons.go:238] Setting addon cloud-spanner=true in "addons-451875"
	I0908 10:30:51.009286  753065 addons.go:69] Setting storage-provisioner=true in profile "addons-451875"
	I0908 10:30:51.009301  753065 addons.go:69] Setting volcano=true in profile "addons-451875"
	I0908 10:30:51.009312  753065 addons.go:238] Setting addon volcano=true in "addons-451875"
	I0908 10:30:51.009314  753065 addons.go:238] Setting addon storage-provisioner=true in "addons-451875"
	I0908 10:30:51.009327  753065 host.go:66] Checking if "addons-451875" exists ...
	I0908 10:30:51.009320  753065 addons.go:69] Setting registry-creds=true in profile "addons-451875"
	I0908 10:30:51.009375  753065 addons.go:69] Setting ingress-dns=true in profile "addons-451875"
	I0908 10:30:51.009382  753065 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-451875"
	I0908 10:30:51.009394  753065 addons.go:238] Setting addon registry-creds=true in "addons-451875"
	I0908 10:30:51.009396  753065 addons.go:238] Setting addon ingress-dns=true in "addons-451875"
	I0908 10:30:51.009382  753065 addons.go:69] Setting gcp-auth=true in profile "addons-451875"
	I0908 10:30:51.009424  753065 host.go:66] Checking if "addons-451875" exists ...
	I0908 10:30:51.009442  753065 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-451875"
	I0908 10:30:51.009426  753065 addons.go:69] Setting volumesnapshots=true in profile "addons-451875"
	I0908 10:30:51.009331  753065 host.go:66] Checking if "addons-451875" exists ...
	I0908 10:30:51.009470  753065 host.go:66] Checking if "addons-451875" exists ...
	I0908 10:30:51.009470  753065 addons.go:69] Setting default-storageclass=true in profile "addons-451875"
	I0908 10:30:51.009482  753065 addons.go:238] Setting addon volumesnapshots=true in "addons-451875"
	I0908 10:30:51.009489  753065 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-451875"
	I0908 10:30:51.009549  753065 host.go:66] Checking if "addons-451875" exists ...
	I0908 10:30:51.009866  753065 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 10:30:51.009897  753065 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 10:30:51.009900  753065 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 10:30:51.009914  753065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 10:30:51.009252  753065 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-451875"
	I0908 10:30:51.009923  753065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 10:30:51.009935  753065 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-451875"
	I0908 10:30:51.009951  753065 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 10:30:51.009274  753065 addons.go:238] Setting addon yakd=true in "addons-451875"
	I0908 10:30:51.009985  753065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 10:30:51.009989  753065 host.go:66] Checking if "addons-451875" exists ...
	I0908 10:30:51.009921  753065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 10:30:51.010022  753065 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 10:30:51.009339  753065 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-451875"
	I0908 10:30:51.009363  753065 addons.go:69] Setting ingress=true in profile "addons-451875"
	I0908 10:30:51.010056  753065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 10:30:51.009464  753065 host.go:66] Checking if "addons-451875" exists ...
	I0908 10:30:51.010067  753065 addons.go:238] Setting addon ingress=true in "addons-451875"
	I0908 10:30:51.009347  753065 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-451875"
	I0908 10:30:51.010073  753065 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 10:30:51.010088  753065 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-451875"
	I0908 10:30:51.010092  753065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 10:30:51.009964  753065 host.go:66] Checking if "addons-451875" exists ...
	I0908 10:30:51.009352  753065 host.go:66] Checking if "addons-451875" exists ...
	I0908 10:30:51.010394  753065 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 10:30:51.010434  753065 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 10:30:51.010444  753065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 10:30:51.010462  753065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 10:30:51.010490  753065 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 10:30:51.009245  753065 addons.go:69] Setting inspektor-gadget=true in profile "addons-451875"
	I0908 10:30:51.010510  753065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 10:30:51.010534  753065 host.go:66] Checking if "addons-451875" exists ...
	I0908 10:30:51.010509  753065 addons.go:238] Setting addon inspektor-gadget=true in "addons-451875"
	I0908 10:30:51.009357  753065 addons.go:69] Setting metrics-server=true in profile "addons-451875"
	I0908 10:30:51.010639  753065 addons.go:238] Setting addon metrics-server=true in "addons-451875"
	I0908 10:30:51.010655  753065 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 10:30:51.010691  753065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 10:30:51.010706  753065 host.go:66] Checking if "addons-451875" exists ...
	I0908 10:30:51.009369  753065 addons.go:69] Setting registry=true in profile "addons-451875"
	I0908 10:30:51.009465  753065 mustload.go:65] Loading cluster: addons-451875
	I0908 10:30:51.010921  753065 host.go:66] Checking if "addons-451875" exists ...
	I0908 10:30:51.010988  753065 addons.go:238] Setting addon registry=true in "addons-451875"
	I0908 10:30:51.010050  753065 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-451875"
	I0908 10:30:51.011370  753065 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 10:30:51.011405  753065 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 10:30:51.011410  753065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 10:30:51.011427  753065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 10:30:51.011498  753065 host.go:66] Checking if "addons-451875" exists ...
	I0908 10:30:51.013525  753065 out.go:179] * Verifying Kubernetes components...
	I0908 10:30:51.014861  753065 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 10:30:51.015190  753065 host.go:66] Checking if "addons-451875" exists ...
	I0908 10:30:51.015592  753065 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 10:30:51.015611  753065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 10:30:51.031107  753065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44107
	I0908 10:30:51.031134  753065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46581
	I0908 10:30:51.031158  753065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41995
	I0908 10:30:51.031118  753065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39363
	I0908 10:30:51.031694  753065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35975
	I0908 10:30:51.032249  753065 main.go:141] libmachine: () Calling .GetVersion
	I0908 10:30:51.032859  753065 main.go:141] libmachine: Using API Version  1
	I0908 10:30:51.032883  753065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 10:30:51.033255  753065 main.go:141] libmachine: () Calling .GetMachineName
	I0908 10:30:51.042021  753065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42053
	I0908 10:30:51.045741  753065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37111
	I0908 10:30:51.045816  753065 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 10:30:51.045860  753065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 10:30:51.045869  753065 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 10:30:51.045895  753065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 10:30:51.046040  753065 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 10:30:51.046079  753065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 10:30:51.047142  753065 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 10:30:51.047181  753065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 10:30:51.053879  753065 config.go:182] Loaded profile config "addons-451875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 10:30:51.054311  753065 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 10:30:51.054358  753065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 10:30:51.057558  753065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34391
	I0908 10:30:51.057639  753065 main.go:141] libmachine: () Calling .GetVersion
	I0908 10:30:51.057776  753065 main.go:141] libmachine: () Calling .GetVersion
	I0908 10:30:51.057785  753065 main.go:141] libmachine: () Calling .GetVersion
	I0908 10:30:51.057879  753065 main.go:141] libmachine: () Calling .GetVersion
	I0908 10:30:51.058626  753065 main.go:141] libmachine: Using API Version  1
	I0908 10:30:51.058646  753065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 10:30:51.058666  753065 main.go:141] libmachine: () Calling .GetVersion
	I0908 10:30:51.058710  753065 main.go:141] libmachine: Using API Version  1
	I0908 10:30:51.058729  753065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 10:30:51.058765  753065 main.go:141] libmachine: () Calling .GetVersion
	I0908 10:30:51.058829  753065 main.go:141] libmachine: Using API Version  1
	I0908 10:30:51.058835  753065 main.go:141] libmachine: Using API Version  1
	I0908 10:30:51.058844  753065 main.go:141] libmachine: () Calling .GetVersion
	I0908 10:30:51.058848  753065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 10:30:51.058847  753065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 10:30:51.059274  753065 main.go:141] libmachine: Using API Version  1
	I0908 10:30:51.059291  753065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 10:30:51.059351  753065 main.go:141] libmachine: () Calling .GetMachineName
	I0908 10:30:51.059469  753065 main.go:141] libmachine: () Calling .GetMachineName
	I0908 10:30:51.059678  753065 main.go:141] libmachine: Using API Version  1
	I0908 10:30:51.059694  753065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 10:30:51.060174  753065 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 10:30:51.060214  753065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 10:30:51.060637  753065 main.go:141] libmachine: () Calling .GetMachineName
	I0908 10:30:51.060734  753065 main.go:141] libmachine: () Calling .GetMachineName
	I0908 10:30:51.060798  753065 main.go:141] libmachine: () Calling .GetMachineName
	I0908 10:30:51.060831  753065 main.go:141] libmachine: () Calling .GetMachineName
	I0908 10:30:51.061077  753065 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 10:30:51.061112  753065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 10:30:51.061357  753065 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 10:30:51.061397  753065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 10:30:51.061667  753065 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 10:30:51.061710  753065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 10:30:51.062274  753065 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 10:30:51.062309  753065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 10:30:51.063016  753065 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 10:30:51.063065  753065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 10:30:51.063784  753065 main.go:141] libmachine: Using API Version  1
	I0908 10:30:51.063806  753065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 10:30:51.064262  753065 main.go:141] libmachine: () Calling .GetMachineName
	I0908 10:30:51.064512  753065 main.go:141] libmachine: (addons-451875) Calling .GetState
	I0908 10:30:51.069074  753065 addons.go:238] Setting addon default-storageclass=true in "addons-451875"
	I0908 10:30:51.069125  753065 host.go:66] Checking if "addons-451875" exists ...
	I0908 10:30:51.069524  753065 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 10:30:51.069555  753065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 10:30:51.083946  753065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37221
	I0908 10:30:51.084631  753065 main.go:141] libmachine: () Calling .GetVersion
	I0908 10:30:51.085382  753065 main.go:141] libmachine: Using API Version  1
	I0908 10:30:51.085404  753065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 10:30:51.087435  753065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40133
	I0908 10:30:51.087580  753065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34583
	I0908 10:30:51.088033  753065 main.go:141] libmachine: () Calling .GetVersion
	I0908 10:30:51.088209  753065 main.go:141] libmachine: () Calling .GetVersion
	I0908 10:30:51.088591  753065 main.go:141] libmachine: Using API Version  1
	I0908 10:30:51.088613  753065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 10:30:51.088675  753065 main.go:141] libmachine: () Calling .GetMachineName
	I0908 10:30:51.089086  753065 main.go:141] libmachine: () Calling .GetMachineName
	I0908 10:30:51.089266  753065 main.go:141] libmachine: Using API Version  1
	I0908 10:30:51.089289  753065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 10:30:51.090155  753065 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 10:30:51.090198  753065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 10:30:51.090714  753065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40753
	I0908 10:30:51.091072  753065 main.go:141] libmachine: () Calling .GetMachineName
	I0908 10:30:51.091644  753065 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 10:30:51.091694  753065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 10:30:51.091914  753065 main.go:141] libmachine: () Calling .GetVersion
	I0908 10:30:51.091934  753065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42843
	I0908 10:30:51.091920  753065 main.go:141] libmachine: (addons-451875) Calling .GetState
	I0908 10:30:51.092475  753065 main.go:141] libmachine: () Calling .GetVersion
	I0908 10:30:51.092668  753065 main.go:141] libmachine: Using API Version  1
	I0908 10:30:51.092690  753065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 10:30:51.093222  753065 main.go:141] libmachine: Using API Version  1
	I0908 10:30:51.093284  753065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 10:30:51.093406  753065 main.go:141] libmachine: () Calling .GetMachineName
	I0908 10:30:51.093786  753065 main.go:141] libmachine: () Calling .GetMachineName
	I0908 10:30:51.093897  753065 main.go:141] libmachine: (addons-451875) Calling .GetState
	I0908 10:30:51.094696  753065 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 10:30:51.094722  753065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 10:30:51.095562  753065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37973
	I0908 10:30:51.096374  753065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34905
	I0908 10:30:51.096768  753065 main.go:141] libmachine: () Calling .GetVersion
	I0908 10:30:51.096875  753065 main.go:141] libmachine: (addons-451875) Calling .DriverName
	I0908 10:30:51.097486  753065 main.go:141] libmachine: Using API Version  1
	I0908 10:30:51.097511  753065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 10:30:51.097796  753065 main.go:141] libmachine: () Calling .GetVersion
	I0908 10:30:51.098565  753065 main.go:141] libmachine: Using API Version  1
	I0908 10:30:51.098591  753065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 10:30:51.099045  753065 main.go:141] libmachine: () Calling .GetMachineName
	I0908 10:30:51.099339  753065 main.go:141] libmachine: () Calling .GetMachineName
	I0908 10:30:51.099771  753065 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 10:30:51.099805  753065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 10:30:51.100036  753065 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.40
	I0908 10:30:51.100703  753065 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 10:30:51.100773  753065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 10:30:51.101887  753065 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0908 10:30:51.101908  753065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0908 10:30:51.101930  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHHostname
	I0908 10:30:51.104678  753065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34429
	I0908 10:30:51.105350  753065 main.go:141] libmachine: () Calling .GetVersion
	I0908 10:30:51.106081  753065 main.go:141] libmachine: Using API Version  1
	I0908 10:30:51.106100  753065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 10:30:51.106163  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:51.106582  753065 main.go:141] libmachine: (addons-451875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:ce:fb", ip: ""} in network mk-addons-451875: {Iface:virbr1 ExpiryTime:2025-09-08 11:30:17 +0000 UTC Type:0 Mac:52:54:00:6b:ce:fb Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-451875 Clientid:01:52:54:00:6b:ce:fb}
	I0908 10:30:51.106602  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined IP address 192.168.39.92 and MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:51.107092  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHPort
	I0908 10:30:51.107326  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHKeyPath
	I0908 10:30:51.107524  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHUsername
	I0908 10:30:51.107695  753065 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/addons-451875/id_rsa Username:docker}
	I0908 10:30:51.108484  753065 main.go:141] libmachine: () Calling .GetMachineName
	I0908 10:30:51.108719  753065 main.go:141] libmachine: (addons-451875) Calling .GetState
	I0908 10:30:51.110713  753065 main.go:141] libmachine: (addons-451875) Calling .DriverName
	I0908 10:30:51.112911  753065 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0908 10:30:51.113986  753065 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0908 10:30:51.114022  753065 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0908 10:30:51.114075  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHHostname
	I0908 10:30:51.114288  753065 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-451875"
	I0908 10:30:51.114372  753065 host.go:66] Checking if "addons-451875" exists ...
	I0908 10:30:51.114753  753065 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 10:30:51.114803  753065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 10:30:51.117490  753065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46047
	I0908 10:30:51.117981  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:51.118072  753065 main.go:141] libmachine: () Calling .GetVersion
	I0908 10:30:51.118871  753065 main.go:141] libmachine: (addons-451875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:ce:fb", ip: ""} in network mk-addons-451875: {Iface:virbr1 ExpiryTime:2025-09-08 11:30:17 +0000 UTC Type:0 Mac:52:54:00:6b:ce:fb Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-451875 Clientid:01:52:54:00:6b:ce:fb}
	I0908 10:30:51.118905  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined IP address 192.168.39.92 and MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:51.119127  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHPort
	I0908 10:30:51.119327  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHKeyPath
	I0908 10:30:51.119527  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHUsername
	I0908 10:30:51.119699  753065 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/addons-451875/id_rsa Username:docker}
	I0908 10:30:51.120428  753065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37999
	I0908 10:30:51.120961  753065 main.go:141] libmachine: () Calling .GetVersion
	I0908 10:30:51.121541  753065 main.go:141] libmachine: Using API Version  1
	I0908 10:30:51.121561  753065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 10:30:51.121996  753065 main.go:141] libmachine: () Calling .GetMachineName
	I0908 10:30:51.122265  753065 main.go:141] libmachine: (addons-451875) Calling .GetState
	I0908 10:30:51.122596  753065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38595
	I0908 10:30:51.123072  753065 main.go:141] libmachine: () Calling .GetVersion
	I0908 10:30:51.123598  753065 main.go:141] libmachine: Using API Version  1
	I0908 10:30:51.123615  753065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 10:30:51.123999  753065 main.go:141] libmachine: () Calling .GetMachineName
	I0908 10:30:51.124190  753065 main.go:141] libmachine: (addons-451875) Calling .GetState
	I0908 10:30:51.125172  753065 main.go:141] libmachine: Using API Version  1
	I0908 10:30:51.125189  753065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 10:30:51.125599  753065 main.go:141] libmachine: () Calling .GetMachineName
	I0908 10:30:51.126169  753065 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 10:30:51.126221  753065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 10:30:51.127817  753065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46029
	I0908 10:30:51.128480  753065 main.go:141] libmachine: () Calling .GetVersion
	I0908 10:30:51.129035  753065 main.go:141] libmachine: Using API Version  1
	I0908 10:30:51.129053  753065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 10:30:51.129504  753065 main.go:141] libmachine: () Calling .GetMachineName
	I0908 10:30:51.129685  753065 main.go:141] libmachine: (addons-451875) Calling .GetState
	I0908 10:30:51.130160  753065 main.go:141] libmachine: (addons-451875) Calling .DriverName
	I0908 10:30:51.132532  753065 main.go:141] libmachine: Making call to close driver server
	I0908 10:30:51.132549  753065 main.go:141] libmachine: (addons-451875) Calling .Close
	I0908 10:30:51.132658  753065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36967
	I0908 10:30:51.133444  753065 main.go:141] libmachine: (addons-451875) DBG | Closing plugin on server side
	I0908 10:30:51.133478  753065 main.go:141] libmachine: Successfully made call to close driver server
	I0908 10:30:51.133487  753065 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 10:30:51.133493  753065 main.go:141] libmachine: Making call to close driver server
	I0908 10:30:51.133500  753065 main.go:141] libmachine: (addons-451875) Calling .Close
	I0908 10:30:51.133581  753065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45453
	I0908 10:30:51.133690  753065 main.go:141] libmachine: Successfully made call to close driver server
	I0908 10:30:51.133706  753065 main.go:141] libmachine: Making call to close connection to plugin binary
	W0908 10:30:51.133806  753065 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0908 10:30:51.133971  753065 main.go:141] libmachine: () Calling .GetVersion
	I0908 10:30:51.134369  753065 main.go:141] libmachine: () Calling .GetVersion
	I0908 10:30:51.134690  753065 main.go:141] libmachine: Using API Version  1
	I0908 10:30:51.134708  753065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 10:30:51.135132  753065 main.go:141] libmachine: () Calling .GetMachineName
	I0908 10:30:51.135369  753065 main.go:141] libmachine: (addons-451875) Calling .GetState
	I0908 10:30:51.136741  753065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44935
	I0908 10:30:51.136984  753065 main.go:141] libmachine: Using API Version  1
	I0908 10:30:51.136999  753065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 10:30:51.138072  753065 main.go:141] libmachine: (addons-451875) Calling .DriverName
	I0908 10:30:51.138151  753065 main.go:141] libmachine: () Calling .GetMachineName
	I0908 10:30:51.138200  753065 main.go:141] libmachine: (addons-451875) Calling .DriverName
	I0908 10:30:51.138253  753065 main.go:141] libmachine: (addons-451875) Calling .DriverName
	I0908 10:30:51.139646  753065 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 10:30:51.139690  753065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 10:30:51.139955  753065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42683
	I0908 10:30:51.140028  753065 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0908 10:30:51.140092  753065 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0908 10:30:51.140109  753065 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0908 10:30:51.140552  753065 main.go:141] libmachine: () Calling .GetVersion
	I0908 10:30:51.141079  753065 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0908 10:30:51.141073  753065 main.go:141] libmachine: () Calling .GetVersion
	I0908 10:30:51.141097  753065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0908 10:30:51.141119  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHHostname
	I0908 10:30:51.141276  753065 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0908 10:30:51.141291  753065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0908 10:30:51.141306  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHHostname
	I0908 10:30:51.141687  753065 main.go:141] libmachine: Using API Version  1
	I0908 10:30:51.141704  753065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 10:30:51.142114  753065 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0908 10:30:51.142133  753065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0908 10:30:51.142151  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHHostname
	I0908 10:30:51.142188  753065 main.go:141] libmachine: () Calling .GetMachineName
	I0908 10:30:51.142837  753065 main.go:141] libmachine: (addons-451875) Calling .GetState
	I0908 10:30:51.142934  753065 main.go:141] libmachine: Using API Version  1
	I0908 10:30:51.142951  753065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 10:30:51.143650  753065 main.go:141] libmachine: () Calling .GetMachineName
	I0908 10:30:51.143878  753065 main.go:141] libmachine: (addons-451875) Calling .GetState
	I0908 10:30:51.144389  753065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46651
	I0908 10:30:51.144838  753065 main.go:141] libmachine: () Calling .GetVersion
	I0908 10:30:51.145307  753065 main.go:141] libmachine: Using API Version  1
	I0908 10:30:51.145325  753065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 10:30:51.145788  753065 main.go:141] libmachine: (addons-451875) Calling .DriverName
	I0908 10:30:51.145788  753065 main.go:141] libmachine: () Calling .GetMachineName
	I0908 10:30:51.146304  753065 main.go:141] libmachine: (addons-451875) Calling .GetState
	I0908 10:30:51.146618  753065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41819
	I0908 10:30:51.147104  753065 main.go:141] libmachine: () Calling .GetVersion
	I0908 10:30:51.147215  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:51.147719  753065 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.0
	I0908 10:30:51.148067  753065 main.go:141] libmachine: Using API Version  1
	I0908 10:30:51.148104  753065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 10:30:51.148181  753065 main.go:141] libmachine: (addons-451875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:ce:fb", ip: ""} in network mk-addons-451875: {Iface:virbr1 ExpiryTime:2025-09-08 11:30:17 +0000 UTC Type:0 Mac:52:54:00:6b:ce:fb Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-451875 Clientid:01:52:54:00:6b:ce:fb}
	I0908 10:30:51.148214  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined IP address 192.168.39.92 and MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:51.148342  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHPort
	I0908 10:30:51.148516  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHKeyPath
	I0908 10:30:51.148675  753065 main.go:141] libmachine: () Calling .GetMachineName
	I0908 10:30:51.148694  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHUsername
	I0908 10:30:51.148758  753065 main.go:141] libmachine: (addons-451875) Calling .DriverName
	I0908 10:30:51.148914  753065 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0908 10:30:51.148933  753065 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0908 10:30:51.148954  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHHostname
	I0908 10:30:51.149045  753065 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/addons-451875/id_rsa Username:docker}
	I0908 10:30:51.149522  753065 main.go:141] libmachine: (addons-451875) Calling .GetState
	I0908 10:30:51.149606  753065 main.go:141] libmachine: (addons-451875) Calling .DriverName
	I0908 10:30:51.149629  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:51.150193  753065 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0908 10:30:51.150859  753065 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0908 10:30:51.151151  753065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41409
	I0908 10:30:51.151671  753065 main.go:141] libmachine: (addons-451875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:ce:fb", ip: ""} in network mk-addons-451875: {Iface:virbr1 ExpiryTime:2025-09-08 11:30:17 +0000 UTC Type:0 Mac:52:54:00:6b:ce:fb Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-451875 Clientid:01:52:54:00:6b:ce:fb}
	I0908 10:30:51.151692  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined IP address 192.168.39.92 and MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:51.151748  753065 main.go:141] libmachine: () Calling .GetVersion
	I0908 10:30:51.151846  753065 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0908 10:30:51.151953  753065 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0908 10:30:51.151976  753065 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0908 10:30:51.151997  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHHostname
	I0908 10:30:51.152110  753065 host.go:66] Checking if "addons-451875" exists ...
	I0908 10:30:51.152576  753065 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 10:30:51.152626  753065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 10:30:51.152883  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHPort
	I0908 10:30:51.152972  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:51.153002  753065 main.go:141] libmachine: (addons-451875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:ce:fb", ip: ""} in network mk-addons-451875: {Iface:virbr1 ExpiryTime:2025-09-08 11:30:17 +0000 UTC Type:0 Mac:52:54:00:6b:ce:fb Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-451875 Clientid:01:52:54:00:6b:ce:fb}
	I0908 10:30:51.153027  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined IP address 192.168.39.92 and MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:51.153159  753065 main.go:141] libmachine: Using API Version  1
	I0908 10:30:51.153180  753065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 10:30:51.153631  753065 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0908 10:30:51.153762  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHKeyPath
	I0908 10:30:51.153845  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHPort
	I0908 10:30:51.153897  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHPort
	I0908 10:30:51.153954  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHUsername
	I0908 10:30:51.153990  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHKeyPath
	I0908 10:30:51.154036  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:51.154076  753065 main.go:141] libmachine: (addons-451875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:ce:fb", ip: ""} in network mk-addons-451875: {Iface:virbr1 ExpiryTime:2025-09-08 11:30:17 +0000 UTC Type:0 Mac:52:54:00:6b:ce:fb Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-451875 Clientid:01:52:54:00:6b:ce:fb}
	I0908 10:30:51.154092  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined IP address 192.168.39.92 and MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:51.154119  753065 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/addons-451875/id_rsa Username:docker}
	I0908 10:30:51.154436  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHUsername
	I0908 10:30:51.154623  753065 main.go:141] libmachine: () Calling .GetMachineName
	I0908 10:30:51.154678  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHKeyPath
	I0908 10:30:51.154710  753065 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/addons-451875/id_rsa Username:docker}
	I0908 10:30:51.155660  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:51.155697  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHUsername
	I0908 10:30:51.155834  753065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33867
	I0908 10:30:51.156006  753065 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/addons-451875/id_rsa Username:docker}
	I0908 10:30:51.156321  753065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45917
	I0908 10:30:51.156415  753065 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0908 10:30:51.156535  753065 main.go:141] libmachine: () Calling .GetVersion
	I0908 10:30:51.156607  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHPort
	I0908 10:30:51.156648  753065 main.go:141] libmachine: (addons-451875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:ce:fb", ip: ""} in network mk-addons-451875: {Iface:virbr1 ExpiryTime:2025-09-08 11:30:17 +0000 UTC Type:0 Mac:52:54:00:6b:ce:fb Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-451875 Clientid:01:52:54:00:6b:ce:fb}
	I0908 10:30:51.156666  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined IP address 192.168.39.92 and MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:51.156838  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHKeyPath
	I0908 10:30:51.156931  753065 main.go:141] libmachine: () Calling .GetVersion
	I0908 10:30:51.157030  753065 main.go:141] libmachine: Using API Version  1
	I0908 10:30:51.157050  753065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 10:30:51.157186  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHUsername
	I0908 10:30:51.157360  753065 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/addons-451875/id_rsa Username:docker}
	I0908 10:30:51.157453  753065 main.go:141] libmachine: Using API Version  1
	I0908 10:30:51.157490  753065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 10:30:51.157607  753065 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 10:30:51.157644  753065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 10:30:51.158275  753065 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0908 10:30:51.158276  753065 main.go:141] libmachine: () Calling .GetMachineName
	I0908 10:30:51.158283  753065 main.go:141] libmachine: () Calling .GetMachineName
	I0908 10:30:51.158515  753065 main.go:141] libmachine: (addons-451875) Calling .GetState
	I0908 10:30:51.158556  753065 main.go:141] libmachine: (addons-451875) Calling .GetState
	I0908 10:30:51.159756  753065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33861
	I0908 10:30:51.160413  753065 main.go:141] libmachine: () Calling .GetVersion
	I0908 10:30:51.160558  753065 main.go:141] libmachine: (addons-451875) Calling .DriverName
	I0908 10:30:51.160969  753065 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0908 10:30:51.161041  753065 main.go:141] libmachine: (addons-451875) Calling .DriverName
	I0908 10:30:51.161783  753065 main.go:141] libmachine: Using API Version  1
	I0908 10:30:51.161813  753065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 10:30:51.161964  753065 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0908 10:30:51.162469  753065 main.go:141] libmachine: () Calling .GetMachineName
	I0908 10:30:51.162659  753065 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0908 10:30:51.162858  753065 main.go:141] libmachine: (addons-451875) Calling .GetState
	I0908 10:30:51.163675  753065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41205
	I0908 10:30:51.164334  753065 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0908 10:30:51.164372  753065 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0908 10:30:51.164726  753065 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0908 10:30:51.164749  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHHostname
	I0908 10:30:51.165328  753065 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0908 10:30:51.165711  753065 main.go:141] libmachine: () Calling .GetVersion
	I0908 10:30:51.166617  753065 main.go:141] libmachine: Using API Version  1
	I0908 10:30:51.166635  753065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 10:30:51.166738  753065 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0908 10:30:51.166930  753065 main.go:141] libmachine: (addons-451875) Calling .DriverName
	I0908 10:30:51.167593  753065 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0908 10:30:51.167614  753065 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0908 10:30:51.167646  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHHostname
	I0908 10:30:51.167709  753065 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0908 10:30:51.168812  753065 main.go:141] libmachine: () Calling .GetMachineName
	I0908 10:30:51.168897  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:51.168989  753065 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0908 10:30:51.169004  753065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0908 10:30:51.169024  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHHostname
	I0908 10:30:51.169096  753065 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0908 10:30:51.169907  753065 main.go:141] libmachine: (addons-451875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:ce:fb", ip: ""} in network mk-addons-451875: {Iface:virbr1 ExpiryTime:2025-09-08 11:30:17 +0000 UTC Type:0 Mac:52:54:00:6b:ce:fb Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-451875 Clientid:01:52:54:00:6b:ce:fb}
	I0908 10:30:51.170022  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined IP address 192.168.39.92 and MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:51.170149  753065 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0908 10:30:51.170186  753065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0908 10:30:51.170206  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHHostname
	I0908 10:30:51.170382  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHPort
	I0908 10:30:51.170575  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHKeyPath
	I0908 10:30:51.170808  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHUsername
	I0908 10:30:51.171023  753065 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/addons-451875/id_rsa Username:docker}
	I0908 10:30:51.171479  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:51.171934  753065 main.go:141] libmachine: (addons-451875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:ce:fb", ip: ""} in network mk-addons-451875: {Iface:virbr1 ExpiryTime:2025-09-08 11:30:17 +0000 UTC Type:0 Mac:52:54:00:6b:ce:fb Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-451875 Clientid:01:52:54:00:6b:ce:fb}
	I0908 10:30:51.171956  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined IP address 192.168.39.92 and MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:51.172589  753065 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 10:30:51.172640  753065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 10:30:51.172941  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHPort
	I0908 10:30:51.173220  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHKeyPath
	I0908 10:30:51.173557  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHUsername
	I0908 10:30:51.173744  753065 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/addons-451875/id_rsa Username:docker}
	I0908 10:30:51.174148  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:51.174670  753065 main.go:141] libmachine: (addons-451875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:ce:fb", ip: ""} in network mk-addons-451875: {Iface:virbr1 ExpiryTime:2025-09-08 11:30:17 +0000 UTC Type:0 Mac:52:54:00:6b:ce:fb Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-451875 Clientid:01:52:54:00:6b:ce:fb}
	I0908 10:30:51.174713  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined IP address 192.168.39.92 and MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:51.175140  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHPort
	I0908 10:30:51.175364  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHKeyPath
	I0908 10:30:51.175555  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHUsername
	I0908 10:30:51.175737  753065 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/addons-451875/id_rsa Username:docker}
	I0908 10:30:51.176934  753065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41845
	I0908 10:30:51.178453  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:51.178563  753065 main.go:141] libmachine: () Calling .GetVersion
	I0908 10:30:51.179131  753065 main.go:141] libmachine: Using API Version  1
	I0908 10:30:51.179149  753065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 10:30:51.179747  753065 main.go:141] libmachine: () Calling .GetMachineName
	I0908 10:30:51.179976  753065 main.go:141] libmachine: (addons-451875) Calling .GetState
	I0908 10:30:51.180423  753065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42591
	I0908 10:30:51.180819  753065 main.go:141] libmachine: (addons-451875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:ce:fb", ip: ""} in network mk-addons-451875: {Iface:virbr1 ExpiryTime:2025-09-08 11:30:17 +0000 UTC Type:0 Mac:52:54:00:6b:ce:fb Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-451875 Clientid:01:52:54:00:6b:ce:fb}
	I0908 10:30:51.180840  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined IP address 192.168.39.92 and MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:51.181069  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHPort
	I0908 10:30:51.181297  753065 main.go:141] libmachine: () Calling .GetVersion
	I0908 10:30:51.181398  753065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41887
	I0908 10:30:51.181500  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHKeyPath
	I0908 10:30:51.181660  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHUsername
	I0908 10:30:51.181820  753065 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/addons-451875/id_rsa Username:docker}
	I0908 10:30:51.181975  753065 main.go:141] libmachine: (addons-451875) Calling .DriverName
	I0908 10:30:51.181976  753065 main.go:141] libmachine: () Calling .GetVersion
	I0908 10:30:51.182670  753065 main.go:141] libmachine: Using API Version  1
	I0908 10:30:51.182693  753065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 10:30:51.183085  753065 main.go:141] libmachine: () Calling .GetMachineName
	I0908 10:30:51.183221  753065 main.go:141] libmachine: Using API Version  1
	I0908 10:30:51.183230  753065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 10:30:51.183445  753065 main.go:141] libmachine: (addons-451875) Calling .GetState
	I0908 10:30:51.183574  753065 main.go:141] libmachine: () Calling .GetMachineName
	I0908 10:30:51.183734  753065 main.go:141] libmachine: (addons-451875) Calling .DriverName
	I0908 10:30:51.183749  753065 out.go:179]   - Using image docker.io/registry:3.0.0
	I0908 10:30:51.184915  753065 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0908 10:30:51.185107  753065 main.go:141] libmachine: (addons-451875) Calling .DriverName
	I0908 10:30:51.185357  753065 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0908 10:30:51.185378  753065 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0908 10:30:51.185398  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHHostname
	I0908 10:30:51.185881  753065 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0908 10:30:51.185909  753065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0908 10:30:51.185928  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHHostname
	I0908 10:30:51.188913  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:51.189367  753065 main.go:141] libmachine: (addons-451875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:ce:fb", ip: ""} in network mk-addons-451875: {Iface:virbr1 ExpiryTime:2025-09-08 11:30:17 +0000 UTC Type:0 Mac:52:54:00:6b:ce:fb Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-451875 Clientid:01:52:54:00:6b:ce:fb}
	I0908 10:30:51.189388  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined IP address 192.168.39.92 and MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:51.189571  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHPort
	I0908 10:30:51.189812  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHKeyPath
	I0908 10:30:51.189998  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHUsername
	I0908 10:30:51.190133  753065 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/addons-451875/id_rsa Username:docker}
	I0908 10:30:51.190492  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:51.190844  753065 main.go:141] libmachine: (addons-451875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:ce:fb", ip: ""} in network mk-addons-451875: {Iface:virbr1 ExpiryTime:2025-09-08 11:30:17 +0000 UTC Type:0 Mac:52:54:00:6b:ce:fb Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-451875 Clientid:01:52:54:00:6b:ce:fb}
	I0908 10:30:51.190863  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined IP address 192.168.39.92 and MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:51.191057  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHPort
	I0908 10:30:51.191207  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHKeyPath
	I0908 10:30:51.191392  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHUsername
	I0908 10:30:51.191529  753065 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/addons-451875/id_rsa Username:docker}
	I0908 10:30:51.192734  753065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41251
	I0908 10:30:51.193212  753065 main.go:141] libmachine: () Calling .GetVersion
	I0908 10:30:51.193360  753065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37957
	I0908 10:30:51.193778  753065 main.go:141] libmachine: Using API Version  1
	I0908 10:30:51.193793  753065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 10:30:51.194082  753065 main.go:141] libmachine: () Calling .GetVersion
	I0908 10:30:51.194197  753065 main.go:141] libmachine: () Calling .GetMachineName
	I0908 10:30:51.194382  753065 main.go:141] libmachine: (addons-451875) Calling .GetState
	I0908 10:30:51.194499  753065 main.go:141] libmachine: Using API Version  1
	I0908 10:30:51.194515  753065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 10:30:51.194913  753065 main.go:141] libmachine: () Calling .GetMachineName
	I0908 10:30:51.195092  753065 main.go:141] libmachine: (addons-451875) Calling .GetState
	I0908 10:30:51.196267  753065 main.go:141] libmachine: (addons-451875) Calling .DriverName
	I0908 10:30:51.196738  753065 main.go:141] libmachine: (addons-451875) Calling .DriverName
	I0908 10:30:51.198457  753065 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0908 10:30:51.198463  753065 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 10:30:51.199532  753065 out.go:179]   - Using image docker.io/busybox:stable
	I0908 10:30:51.199534  753065 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 10:30:51.199619  753065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0908 10:30:51.199637  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHHostname
	I0908 10:30:51.200571  753065 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0908 10:30:51.200591  753065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0908 10:30:51.200611  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHHostname
	I0908 10:30:51.202887  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:51.203473  753065 main.go:141] libmachine: (addons-451875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:ce:fb", ip: ""} in network mk-addons-451875: {Iface:virbr1 ExpiryTime:2025-09-08 11:30:17 +0000 UTC Type:0 Mac:52:54:00:6b:ce:fb Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-451875 Clientid:01:52:54:00:6b:ce:fb}
	I0908 10:30:51.203508  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined IP address 192.168.39.92 and MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:51.203659  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHPort
	I0908 10:30:51.203841  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHKeyPath
	I0908 10:30:51.203989  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHUsername
	I0908 10:30:51.204142  753065 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/addons-451875/id_rsa Username:docker}
	I0908 10:30:51.204826  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:51.205356  753065 main.go:141] libmachine: (addons-451875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:ce:fb", ip: ""} in network mk-addons-451875: {Iface:virbr1 ExpiryTime:2025-09-08 11:30:17 +0000 UTC Type:0 Mac:52:54:00:6b:ce:fb Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-451875 Clientid:01:52:54:00:6b:ce:fb}
	I0908 10:30:51.205372  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined IP address 192.168.39.92 and MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:51.205553  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHPort
	I0908 10:30:51.205722  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHKeyPath
	I0908 10:30:51.205832  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHUsername
	I0908 10:30:51.205939  753065 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/addons-451875/id_rsa Username:docker}
	I0908 10:30:51.552087  753065 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 10:30:51.552132  753065 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0908 10:30:51.951443  753065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0908 10:30:52.008879  753065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0908 10:30:52.016630  753065 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0908 10:30:52.016663  753065 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0908 10:30:52.042759  753065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0908 10:30:52.097737  753065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0908 10:30:52.193270  753065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0908 10:30:52.217886  753065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0908 10:30:52.228502  753065 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0908 10:30:52.228549  753065 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0908 10:30:52.245107  753065 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0908 10:30:52.245130  753065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0908 10:30:52.249690  753065 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0908 10:30:52.249715  753065 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0908 10:30:52.364951  753065 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0908 10:30:52.364987  753065 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0908 10:30:52.384386  753065 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0908 10:30:52.384414  753065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0908 10:30:52.452674  753065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0908 10:30:52.593083  753065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0908 10:30:52.593419  753065 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0908 10:30:52.593439  753065 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0908 10:30:52.621926  753065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 10:30:52.943057  753065 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0908 10:30:52.943087  753065 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0908 10:30:52.990567  753065 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0908 10:30:52.990598  753065 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0908 10:30:52.992992  753065 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0908 10:30:52.993021  753065 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0908 10:30:53.060063  753065 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0908 10:30:53.060091  753065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0908 10:30:53.078394  753065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 10:30:53.214663  753065 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0908 10:30:53.214700  753065 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0908 10:30:53.235153  753065 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 10:30:53.235189  753065 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0908 10:30:53.354560  753065 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0908 10:30:53.354594  753065 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0908 10:30:53.364658  753065 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0908 10:30:53.364690  753065 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0908 10:30:53.438589  753065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0908 10:30:53.547524  753065 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0908 10:30:53.547552  753065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0908 10:30:53.548618  753065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 10:30:53.721005  753065 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0908 10:30:53.721038  753065 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0908 10:30:53.765510  753065 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0908 10:30:53.765543  753065 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0908 10:30:53.897328  753065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0908 10:30:54.157470  753065 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0908 10:30:54.157511  753065 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0908 10:30:54.167790  753065 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0908 10:30:54.167815  753065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0908 10:30:54.711434  753065 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0908 10:30:54.711471  753065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0908 10:30:54.728879  753065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0908 10:30:54.732383  753065 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.180213342s)
	I0908 10:30:54.732419  753065 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0908 10:30:54.732468  753065 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.18034289s)
	I0908 10:30:54.732601  753065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.781109198s)
	I0908 10:30:54.732654  753065 main.go:141] libmachine: Making call to close driver server
	I0908 10:30:54.732673  753065 main.go:141] libmachine: (addons-451875) Calling .Close
	I0908 10:30:54.733040  753065 main.go:141] libmachine: (addons-451875) DBG | Closing plugin on server side
	I0908 10:30:54.733051  753065 main.go:141] libmachine: Successfully made call to close driver server
	I0908 10:30:54.733125  753065 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 10:30:54.733140  753065 main.go:141] libmachine: Making call to close driver server
	I0908 10:30:54.733150  753065 main.go:141] libmachine: (addons-451875) Calling .Close
	I0908 10:30:54.733473  753065 main.go:141] libmachine: (addons-451875) DBG | Closing plugin on server side
	I0908 10:30:54.733530  753065 main.go:141] libmachine: Successfully made call to close driver server
	I0908 10:30:54.733567  753065 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 10:30:54.733605  753065 node_ready.go:35] waiting up to 6m0s for node "addons-451875" to be "Ready" ...
	I0908 10:30:54.764781  753065 node_ready.go:49] node "addons-451875" is "Ready"
	I0908 10:30:54.764827  753065 node_ready.go:38] duration metric: took 31.178893ms for node "addons-451875" to be "Ready" ...
	I0908 10:30:54.764850  753065 api_server.go:52] waiting for apiserver process to appear ...
	I0908 10:30:54.764926  753065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 10:30:55.242978  753065 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-451875" context rescaled to 1 replicas
	I0908 10:30:55.263806  753065 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0908 10:30:55.263843  753065 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0908 10:30:55.553480  753065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.510673013s)
	I0908 10:30:55.553548  753065 main.go:141] libmachine: Making call to close driver server
	I0908 10:30:55.553563  753065 main.go:141] libmachine: (addons-451875) Calling .Close
	I0908 10:30:55.553568  753065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.544647253s)
	I0908 10:30:55.553616  753065 main.go:141] libmachine: Making call to close driver server
	I0908 10:30:55.553635  753065 main.go:141] libmachine: (addons-451875) Calling .Close
	I0908 10:30:55.553945  753065 main.go:141] libmachine: Successfully made call to close driver server
	I0908 10:30:55.553962  753065 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 10:30:55.553983  753065 main.go:141] libmachine: Making call to close driver server
	I0908 10:30:55.553992  753065 main.go:141] libmachine: (addons-451875) Calling .Close
	I0908 10:30:55.554113  753065 main.go:141] libmachine: (addons-451875) DBG | Closing plugin on server side
	I0908 10:30:55.554135  753065 main.go:141] libmachine: Successfully made call to close driver server
	I0908 10:30:55.554174  753065 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 10:30:55.554200  753065 main.go:141] libmachine: Making call to close driver server
	I0908 10:30:55.554215  753065 main.go:141] libmachine: (addons-451875) Calling .Close
	I0908 10:30:55.554283  753065 main.go:141] libmachine: Successfully made call to close driver server
	I0908 10:30:55.554310  753065 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 10:30:55.554404  753065 main.go:141] libmachine: (addons-451875) DBG | Closing plugin on server side
	I0908 10:30:55.554445  753065 main.go:141] libmachine: Successfully made call to close driver server
	I0908 10:30:55.554460  753065 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 10:30:55.555469  753065 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0908 10:30:55.555491  753065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0908 10:30:55.693548  753065 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0908 10:30:55.693588  753065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0908 10:30:56.038842  753065 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0908 10:30:56.038883  753065 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0908 10:30:56.276146  753065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0908 10:30:58.644206  753065 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0908 10:30:58.644259  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHHostname
	I0908 10:30:58.648336  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:58.648835  753065 main.go:141] libmachine: (addons-451875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:ce:fb", ip: ""} in network mk-addons-451875: {Iface:virbr1 ExpiryTime:2025-09-08 11:30:17 +0000 UTC Type:0 Mac:52:54:00:6b:ce:fb Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-451875 Clientid:01:52:54:00:6b:ce:fb}
	I0908 10:30:58.648876  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined IP address 192.168.39.92 and MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:58.649005  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHPort
	I0908 10:30:58.649277  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHKeyPath
	I0908 10:30:58.649472  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHUsername
	I0908 10:30:58.649686  753065 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/addons-451875/id_rsa Username:docker}
	I0908 10:30:58.894622  753065 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0908 10:30:59.000703  753065 addons.go:238] Setting addon gcp-auth=true in "addons-451875"
	I0908 10:30:59.000773  753065 host.go:66] Checking if "addons-451875" exists ...
	I0908 10:30:59.001095  753065 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 10:30:59.001176  753065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 10:30:59.017333  753065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34557
	I0908 10:30:59.017889  753065 main.go:141] libmachine: () Calling .GetVersion
	I0908 10:30:59.018363  753065 main.go:141] libmachine: Using API Version  1
	I0908 10:30:59.018384  753065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 10:30:59.018680  753065 main.go:141] libmachine: () Calling .GetMachineName
	I0908 10:30:59.019326  753065 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 10:30:59.019380  753065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 10:30:59.035781  753065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36713
	I0908 10:30:59.036353  753065 main.go:141] libmachine: () Calling .GetVersion
	I0908 10:30:59.036926  753065 main.go:141] libmachine: Using API Version  1
	I0908 10:30:59.036955  753065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 10:30:59.037311  753065 main.go:141] libmachine: () Calling .GetMachineName
	I0908 10:30:59.037518  753065 main.go:141] libmachine: (addons-451875) Calling .GetState
	I0908 10:30:59.039288  753065 main.go:141] libmachine: (addons-451875) Calling .DriverName
	I0908 10:30:59.039527  753065 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0908 10:30:59.039554  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHHostname
	I0908 10:30:59.042841  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:59.043272  753065 main.go:141] libmachine: (addons-451875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:ce:fb", ip: ""} in network mk-addons-451875: {Iface:virbr1 ExpiryTime:2025-09-08 11:30:17 +0000 UTC Type:0 Mac:52:54:00:6b:ce:fb Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-451875 Clientid:01:52:54:00:6b:ce:fb}
	I0908 10:30:59.043301  753065 main.go:141] libmachine: (addons-451875) DBG | domain addons-451875 has defined IP address 192.168.39.92 and MAC address 52:54:00:6b:ce:fb in network mk-addons-451875
	I0908 10:30:59.043519  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHPort
	I0908 10:30:59.043729  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHKeyPath
	I0908 10:30:59.043897  753065 main.go:141] libmachine: (addons-451875) Calling .GetSSHUsername
	I0908 10:30:59.044063  753065 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/addons-451875/id_rsa Username:docker}
	I0908 10:31:00.019153  753065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.921365948s)
	I0908 10:31:00.019211  753065 main.go:141] libmachine: Making call to close driver server
	I0908 10:31:00.019226  753065 main.go:141] libmachine: (addons-451875) Calling .Close
	I0908 10:31:00.019244  753065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.825926846s)
	I0908 10:31:00.019295  753065 main.go:141] libmachine: Making call to close driver server
	I0908 10:31:00.019312  753065 main.go:141] libmachine: (addons-451875) Calling .Close
	I0908 10:31:00.019329  753065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (7.801411099s)
	I0908 10:31:00.019369  753065 main.go:141] libmachine: Making call to close driver server
	I0908 10:31:00.019427  753065 main.go:141] libmachine: (addons-451875) Calling .Close
	I0908 10:31:00.019490  753065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.426384396s)
	I0908 10:31:00.019429  753065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.566727597s)
	I0908 10:31:00.019549  753065 main.go:141] libmachine: Making call to close driver server
	I0908 10:31:00.019551  753065 main.go:141] libmachine: Successfully made call to close driver server
	I0908 10:31:00.019561  753065 main.go:141] libmachine: (addons-451875) Calling .Close
	I0908 10:31:00.019570  753065 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 10:31:00.019578  753065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.397626113s)
	I0908 10:31:00.019582  753065 main.go:141] libmachine: Making call to close driver server
	I0908 10:31:00.019593  753065 main.go:141] libmachine: (addons-451875) Calling .Close
	I0908 10:31:00.019523  753065 main.go:141] libmachine: Making call to close driver server
	I0908 10:31:00.019617  753065 main.go:141] libmachine: (addons-451875) Calling .Close
	I0908 10:31:00.019593  753065 main.go:141] libmachine: Making call to close driver server
	I0908 10:31:00.019647  753065 main.go:141] libmachine: (addons-451875) Calling .Close
	I0908 10:31:00.019676  753065 main.go:141] libmachine: (addons-451875) DBG | Closing plugin on server side
	I0908 10:31:00.019673  753065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (6.941252802s)
	I0908 10:31:00.019704  753065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.581089052s)
	I0908 10:31:00.019724  753065 main.go:141] libmachine: Making call to close driver server
	I0908 10:31:00.019733  753065 main.go:141] libmachine: (addons-451875) Calling .Close
	I0908 10:31:00.019732  753065 main.go:141] libmachine: (addons-451875) DBG | Closing plugin on server side
	I0908 10:31:00.019759  753065 main.go:141] libmachine: Successfully made call to close driver server
	I0908 10:31:00.019773  753065 main.go:141] libmachine: Making call to close connection to plugin binary
	W0908 10:31:00.019704  753065 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 10:31:00.019784  753065 main.go:141] libmachine: Making call to close driver server
	I0908 10:31:00.019794  753065 main.go:141] libmachine: (addons-451875) Calling .Close
	I0908 10:31:00.019799  753065 retry.go:31] will retry after 255.15896ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 10:31:00.019812  753065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.471166152s)
	I0908 10:31:00.019829  753065 main.go:141] libmachine: Making call to close driver server
	I0908 10:31:00.019837  753065 main.go:141] libmachine: (addons-451875) Calling .Close
	I0908 10:31:00.019851  753065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.122490072s)
	I0908 10:31:00.019866  753065 main.go:141] libmachine: (addons-451875) DBG | Closing plugin on server side
	I0908 10:31:00.019875  753065 main.go:141] libmachine: Making call to close driver server
	I0908 10:31:00.019883  753065 main.go:141] libmachine: (addons-451875) Calling .Close
	I0908 10:31:00.019886  753065 main.go:141] libmachine: Successfully made call to close driver server
	I0908 10:31:00.019893  753065 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 10:31:00.019902  753065 addons.go:479] Verifying addon ingress=true in "addons-451875"
	I0908 10:31:00.019939  753065 main.go:141] libmachine: Successfully made call to close driver server
	I0908 10:31:00.019947  753065 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 10:31:00.019956  753065 main.go:141] libmachine: Making call to close driver server
	I0908 10:31:00.019962  753065 main.go:141] libmachine: (addons-451875) Calling .Close
	I0908 10:31:00.020241  753065 main.go:141] libmachine: (addons-451875) DBG | Closing plugin on server side
	I0908 10:31:00.020272  753065 main.go:141] libmachine: Successfully made call to close driver server
	I0908 10:31:00.020280  753065 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 10:31:00.020288  753065 main.go:141] libmachine: Making call to close driver server
	I0908 10:31:00.020296  753065 main.go:141] libmachine: (addons-451875) Calling .Close
	I0908 10:31:00.020349  753065 main.go:141] libmachine: (addons-451875) DBG | Closing plugin on server side
	I0908 10:31:00.020368  753065 main.go:141] libmachine: Successfully made call to close driver server
	I0908 10:31:00.020374  753065 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 10:31:00.020382  753065 main.go:141] libmachine: Making call to close driver server
	I0908 10:31:00.020388  753065 main.go:141] libmachine: (addons-451875) Calling .Close
	I0908 10:31:00.020424  753065 main.go:141] libmachine: (addons-451875) DBG | Closing plugin on server side
	I0908 10:31:00.020437  753065 main.go:141] libmachine: (addons-451875) DBG | Closing plugin on server side
	I0908 10:31:00.020454  753065 main.go:141] libmachine: Successfully made call to close driver server
	I0908 10:31:00.020460  753065 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 10:31:00.020467  753065 main.go:141] libmachine: Making call to close driver server
	I0908 10:31:00.020474  753065 main.go:141] libmachine: (addons-451875) Calling .Close
	I0908 10:31:00.020514  753065 main.go:141] libmachine: Successfully made call to close driver server
	I0908 10:31:00.020521  753065 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 10:31:00.020566  753065 main.go:141] libmachine: (addons-451875) DBG | Closing plugin on server side
	I0908 10:31:00.020586  753065 main.go:141] libmachine: Successfully made call to close driver server
	I0908 10:31:00.020593  753065 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 10:31:00.020854  753065 main.go:141] libmachine: (addons-451875) DBG | Closing plugin on server side
	I0908 10:31:00.020885  753065 main.go:141] libmachine: Successfully made call to close driver server
	I0908 10:31:00.020892  753065 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 10:31:00.020900  753065 main.go:141] libmachine: Making call to close driver server
	I0908 10:31:00.020907  753065 main.go:141] libmachine: (addons-451875) Calling .Close
	I0908 10:31:00.020969  753065 main.go:141] libmachine: (addons-451875) DBG | Closing plugin on server side
	I0908 10:31:00.020992  753065 main.go:141] libmachine: Successfully made call to close driver server
	I0908 10:31:00.021001  753065 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 10:31:00.021010  753065 main.go:141] libmachine: Making call to close driver server
	I0908 10:31:00.021017  753065 main.go:141] libmachine: (addons-451875) Calling .Close
	I0908 10:31:00.021109  753065 main.go:141] libmachine: Successfully made call to close driver server
	I0908 10:31:00.021129  753065 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 10:31:00.021139  753065 addons.go:479] Verifying addon metrics-server=true in "addons-451875"
	I0908 10:31:00.021778  753065 main.go:141] libmachine: (addons-451875) DBG | Closing plugin on server side
	I0908 10:31:00.021810  753065 main.go:141] libmachine: Successfully made call to close driver server
	I0908 10:31:00.021818  753065 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 10:31:00.023129  753065 main.go:141] libmachine: (addons-451875) DBG | Closing plugin on server side
	I0908 10:31:00.023161  753065 main.go:141] libmachine: Successfully made call to close driver server
	I0908 10:31:00.023167  753065 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 10:31:00.023848  753065 main.go:141] libmachine: (addons-451875) DBG | Closing plugin on server side
	I0908 10:31:00.023874  753065 main.go:141] libmachine: Successfully made call to close driver server
	I0908 10:31:00.023880  753065 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 10:31:00.023889  753065 addons.go:479] Verifying addon registry=true in "addons-451875"
	I0908 10:31:00.024469  753065 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-451875 service yakd-dashboard -n yakd-dashboard
	
	I0908 10:31:00.025209  753065 out.go:179] * Verifying registry addon...
	I0908 10:31:00.026009  753065 main.go:141] libmachine: (addons-451875) DBG | Closing plugin on server side
	I0908 10:31:00.026049  753065 main.go:141] libmachine: Successfully made call to close driver server
	I0908 10:31:00.026056  753065 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 10:31:00.026491  753065 main.go:141] libmachine: Successfully made call to close driver server
	I0908 10:31:00.026509  753065 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 10:31:00.026518  753065 main.go:141] libmachine: Making call to close driver server
	I0908 10:31:00.026550  753065 main.go:141] libmachine: (addons-451875) Calling .Close
	I0908 10:31:00.026833  753065 main.go:141] libmachine: Successfully made call to close driver server
	I0908 10:31:00.026851  753065 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 10:31:00.027874  753065 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0908 10:31:00.029010  753065 out.go:179] * Verifying ingress addon...
	I0908 10:31:00.030462  753065 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0908 10:31:00.036390  753065 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.27144183s)
	I0908 10:31:00.036419  753065 api_server.go:72] duration metric: took 9.027406153s to wait for apiserver process to appear ...
	I0908 10:31:00.036425  753065 api_server.go:88] waiting for apiserver healthz status ...
	I0908 10:31:00.036443  753065 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I0908 10:31:00.036678  753065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.307740111s)
	W0908 10:31:00.036757  753065 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0908 10:31:00.036795  753065 retry.go:31] will retry after 154.744348ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0908 10:31:00.068816  753065 api_server.go:279] https://192.168.39.92:8443/healthz returned 200:
	ok
	I0908 10:31:00.080361  753065 api_server.go:141] control plane version: v1.34.0
	I0908 10:31:00.080399  753065 api_server.go:131] duration metric: took 43.9679ms to wait for apiserver health ...
	I0908 10:31:00.080410  753065 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 10:31:00.081086  753065 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0908 10:31:00.081113  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:00.081189  753065 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0908 10:31:00.081216  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:00.131576  753065 system_pods.go:59] 16 kube-system pods found
	I0908 10:31:00.131630  753065 system_pods.go:61] "amd-gpu-device-plugin-7clhx" [330689b3-479a-458b-84e0-3903da038130] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0908 10:31:00.131643  753065 system_pods.go:61] "coredns-66bc5c9577-tvgs6" [4e3144d8-541c-4996-9c58-43221e2a663d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 10:31:00.131662  753065 system_pods.go:61] "coredns-66bc5c9577-x6z78" [76849fda-41ac-496c-879d-caf096544344] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 10:31:00.131670  753065 system_pods.go:61] "etcd-addons-451875" [de917a98-adb0-4c60-86a5-b76c9d68221c] Running
	I0908 10:31:00.131677  753065 system_pods.go:61] "kube-apiserver-addons-451875" [3f4ad989-bf9d-4200-ba4e-b7230af6acd0] Running
	I0908 10:31:00.131684  753065 system_pods.go:61] "kube-controller-manager-addons-451875" [93b44128-074a-4f2a-8809-4b62bd21cc3c] Running
	I0908 10:31:00.131695  753065 system_pods.go:61] "kube-ingress-dns-minikube" [e651e5ca-fab8-4d0e-af0d-e8d0281dcb48] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0908 10:31:00.131701  753065 system_pods.go:61] "kube-proxy-4whd8" [317bb955-9731-4239-9266-1835fff2a8fa] Running
	I0908 10:31:00.131708  753065 system_pods.go:61] "kube-scheduler-addons-451875" [8d167f91-db8b-4094-b58a-67b75b70467c] Running
	I0908 10:31:00.131720  753065 system_pods.go:61] "metrics-server-85b7d694d7-s4lpz" [9a1e2579-44a0-42c3-84fd-567e80c96fc1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 10:31:00.131729  753065 system_pods.go:61] "nvidia-device-plugin-daemonset-w6bbw" [248f80b5-4ed1-4698-ac0d-9cd7d127bbf2] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0908 10:31:00.131739  753065 system_pods.go:61] "registry-66898fdd98-v5x6w" [3db84b88-8a2e-45b9-9019-7c26805a646c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0908 10:31:00.131748  753065 system_pods.go:61] "registry-creds-764b6fb674-9rn9k" [8b3233b4-84e4-4bc6-809b-be7c22978dde] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0908 10:31:00.131758  753065 system_pods.go:61] "registry-proxy-58n5b" [a453df18-bf9a-4b07-9f85-b98dd83f4a43] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0908 10:31:00.131767  753065 system_pods.go:61] "snapshot-controller-7d9fbc56b8-j5x9x" [7ca15071-9794-4019-96a7-87c0fcc0c40c] Pending
	I0908 10:31:00.131776  753065 system_pods.go:61] "storage-provisioner" [65573278-5aba-44c7-b180-a1fd08931683] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 10:31:00.131786  753065 system_pods.go:74] duration metric: took 51.36766ms to wait for pod list to return data ...
	I0908 10:31:00.131799  753065 default_sa.go:34] waiting for default service account to be created ...
	I0908 10:31:00.138724  753065 main.go:141] libmachine: Making call to close driver server
	I0908 10:31:00.138751  753065 main.go:141] libmachine: (addons-451875) Calling .Close
	I0908 10:31:00.139054  753065 main.go:141] libmachine: (addons-451875) DBG | Closing plugin on server side
	I0908 10:31:00.139071  753065 main.go:141] libmachine: Successfully made call to close driver server
	I0908 10:31:00.139088  753065 main.go:141] libmachine: Making call to close connection to plugin binary
	W0908 10:31:00.139228  753065 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0908 10:31:00.153941  753065 default_sa.go:45] found service account: "default"
	I0908 10:31:00.153975  753065 default_sa.go:55] duration metric: took 22.167049ms for default service account to be created ...
	I0908 10:31:00.153988  753065 system_pods.go:116] waiting for k8s-apps to be running ...
	I0908 10:31:00.192142  753065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0908 10:31:00.202140  753065 main.go:141] libmachine: Making call to close driver server
	I0908 10:31:00.202175  753065 main.go:141] libmachine: (addons-451875) Calling .Close
	I0908 10:31:00.202575  753065 main.go:141] libmachine: Successfully made call to close driver server
	I0908 10:31:00.202633  753065 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 10:31:00.202634  753065 main.go:141] libmachine: (addons-451875) DBG | Closing plugin on server side
	I0908 10:31:00.271699  753065 system_pods.go:86] 17 kube-system pods found
	I0908 10:31:00.271746  753065 system_pods.go:89] "amd-gpu-device-plugin-7clhx" [330689b3-479a-458b-84e0-3903da038130] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0908 10:31:00.271756  753065 system_pods.go:89] "coredns-66bc5c9577-tvgs6" [4e3144d8-541c-4996-9c58-43221e2a663d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 10:31:00.271765  753065 system_pods.go:89] "coredns-66bc5c9577-x6z78" [76849fda-41ac-496c-879d-caf096544344] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 10:31:00.271771  753065 system_pods.go:89] "etcd-addons-451875" [de917a98-adb0-4c60-86a5-b76c9d68221c] Running
	I0908 10:31:00.271782  753065 system_pods.go:89] "kube-apiserver-addons-451875" [3f4ad989-bf9d-4200-ba4e-b7230af6acd0] Running
	I0908 10:31:00.271789  753065 system_pods.go:89] "kube-controller-manager-addons-451875" [93b44128-074a-4f2a-8809-4b62bd21cc3c] Running
	I0908 10:31:00.271798  753065 system_pods.go:89] "kube-ingress-dns-minikube" [e651e5ca-fab8-4d0e-af0d-e8d0281dcb48] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0908 10:31:00.271807  753065 system_pods.go:89] "kube-proxy-4whd8" [317bb955-9731-4239-9266-1835fff2a8fa] Running
	I0908 10:31:00.271813  753065 system_pods.go:89] "kube-scheduler-addons-451875" [8d167f91-db8b-4094-b58a-67b75b70467c] Running
	I0908 10:31:00.271825  753065 system_pods.go:89] "metrics-server-85b7d694d7-s4lpz" [9a1e2579-44a0-42c3-84fd-567e80c96fc1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 10:31:00.271838  753065 system_pods.go:89] "nvidia-device-plugin-daemonset-w6bbw" [248f80b5-4ed1-4698-ac0d-9cd7d127bbf2] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0908 10:31:00.271850  753065 system_pods.go:89] "registry-66898fdd98-v5x6w" [3db84b88-8a2e-45b9-9019-7c26805a646c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0908 10:31:00.271871  753065 system_pods.go:89] "registry-creds-764b6fb674-9rn9k" [8b3233b4-84e4-4bc6-809b-be7c22978dde] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0908 10:31:00.271882  753065 system_pods.go:89] "registry-proxy-58n5b" [a453df18-bf9a-4b07-9f85-b98dd83f4a43] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0908 10:31:00.271894  753065 system_pods.go:89] "snapshot-controller-7d9fbc56b8-j5x9x" [7ca15071-9794-4019-96a7-87c0fcc0c40c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 10:31:00.271903  753065 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kqrdg" [5301e6b3-aeb7-4c63-9f80-34c0fbe45aaa] Pending
	I0908 10:31:00.271912  753065 system_pods.go:89] "storage-provisioner" [65573278-5aba-44c7-b180-a1fd08931683] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 10:31:00.271926  753065 system_pods.go:126] duration metric: took 117.929461ms to wait for k8s-apps to be running ...
	I0908 10:31:00.271957  753065 system_svc.go:44] waiting for kubelet service to be running ....
	I0908 10:31:00.272022  753065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 10:31:00.275144  753065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 10:31:00.566566  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:00.566703  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:01.048078  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:01.048094  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:01.055406  753065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.77920055s)
	I0908 10:31:01.055466  753065 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.015911734s)
	I0908 10:31:01.055470  753065 main.go:141] libmachine: Making call to close driver server
	I0908 10:31:01.055677  753065 main.go:141] libmachine: (addons-451875) Calling .Close
	I0908 10:31:01.055964  753065 main.go:141] libmachine: Successfully made call to close driver server
	I0908 10:31:01.056037  753065 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 10:31:01.056051  753065 main.go:141] libmachine: Making call to close driver server
	I0908 10:31:01.056073  753065 main.go:141] libmachine: (addons-451875) DBG | Closing plugin on server side
	I0908 10:31:01.056153  753065 main.go:141] libmachine: (addons-451875) Calling .Close
	I0908 10:31:01.056429  753065 main.go:141] libmachine: Successfully made call to close driver server
	I0908 10:31:01.056445  753065 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 10:31:01.056473  753065 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-451875"
	I0908 10:31:01.056758  753065 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0908 10:31:01.057666  753065 out.go:179] * Verifying csi-hostpath-driver addon...
	I0908 10:31:01.058703  753065 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0908 10:31:01.059461  753065 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0908 10:31:01.059640  753065 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0908 10:31:01.059663  753065 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0908 10:31:01.080496  753065 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0908 10:31:01.080522  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:01.238436  753065 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0908 10:31:01.238472  753065 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0908 10:31:01.398172  753065 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0908 10:31:01.398198  753065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0908 10:31:01.512434  753065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0908 10:31:01.536438  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:01.536977  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:01.564816  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:02.037321  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:02.037579  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:02.064873  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:02.536487  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:02.536767  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:02.565062  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:03.079177  753065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.886985876s)
	I0908 10:31:03.079237  753065 main.go:141] libmachine: Making call to close driver server
	I0908 10:31:03.079252  753065 main.go:141] libmachine: (addons-451875) Calling .Close
	I0908 10:31:03.079256  753065 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.807202023s)
	I0908 10:31:03.079298  753065 system_svc.go:56] duration metric: took 2.80733522s WaitForService to wait for kubelet
	I0908 10:31:03.079314  753065 kubeadm.go:578] duration metric: took 12.070297927s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 10:31:03.079341  753065 node_conditions.go:102] verifying NodePressure condition ...
	I0908 10:31:03.079597  753065 main.go:141] libmachine: (addons-451875) DBG | Closing plugin on server side
	I0908 10:31:03.079597  753065 main.go:141] libmachine: Successfully made call to close driver server
	I0908 10:31:03.079622  753065 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 10:31:03.079636  753065 main.go:141] libmachine: Making call to close driver server
	I0908 10:31:03.079643  753065 main.go:141] libmachine: (addons-451875) Calling .Close
	I0908 10:31:03.079971  753065 main.go:141] libmachine: Successfully made call to close driver server
	I0908 10:31:03.080024  753065 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 10:31:03.080039  753065 main.go:141] libmachine: (addons-451875) DBG | Closing plugin on server side
	I0908 10:31:03.090186  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:03.090242  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:03.090420  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:03.150187  753065 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0908 10:31:03.150217  753065 node_conditions.go:123] node cpu capacity is 2
	I0908 10:31:03.150229  753065 node_conditions.go:105] duration metric: took 70.883144ms to run NodePressure ...
	I0908 10:31:03.150253  753065 start.go:241] waiting for startup goroutines ...
	I0908 10:31:03.538612  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:03.541813  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:03.566111  753065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.290905571s)
	I0908 10:31:03.566150  753065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.053638531s)
	W0908 10:31:03.566182  753065 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 10:31:03.566211  753065 retry.go:31] will retry after 266.849724ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 10:31:03.566212  753065 main.go:141] libmachine: Making call to close driver server
	I0908 10:31:03.566255  753065 main.go:141] libmachine: (addons-451875) Calling .Close
	I0908 10:31:03.566570  753065 main.go:141] libmachine: Successfully made call to close driver server
	I0908 10:31:03.566587  753065 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 10:31:03.566595  753065 main.go:141] libmachine: Making call to close driver server
	I0908 10:31:03.566595  753065 main.go:141] libmachine: (addons-451875) DBG | Closing plugin on server side
	I0908 10:31:03.566602  753065 main.go:141] libmachine: (addons-451875) Calling .Close
	I0908 10:31:03.566855  753065 main.go:141] libmachine: Successfully made call to close driver server
	I0908 10:31:03.566868  753065 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 10:31:03.566891  753065 main.go:141] libmachine: (addons-451875) DBG | Closing plugin on server side
	I0908 10:31:03.567545  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:03.567927  753065 addons.go:479] Verifying addon gcp-auth=true in "addons-451875"
	I0908 10:31:03.569352  753065 out.go:179] * Verifying gcp-auth addon...
	I0908 10:31:03.571446  753065 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0908 10:31:03.576223  753065 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0908 10:31:03.576239  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:03.833732  753065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 10:31:04.036519  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:04.038302  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:04.066619  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:04.136287  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:04.536554  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:04.538542  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:04.565468  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:04.574435  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:05.031581  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:05.037163  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:05.045214  753065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.211436441s)
	W0908 10:31:05.045267  753065 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 10:31:05.045299  753065 retry.go:31] will retry after 412.231823ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 10:31:05.063080  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:05.074496  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:05.457923  753065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 10:31:05.534671  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:05.536062  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:05.565088  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:05.576426  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:06.034042  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:06.034379  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:06.063543  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:06.076427  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 10:31:06.157061  753065 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 10:31:06.157110  753065 retry.go:31] will retry after 831.365618ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 10:31:06.533181  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:06.534446  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:06.563533  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:06.574997  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:06.989542  753065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 10:31:07.035250  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:07.035332  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:07.064245  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:07.076536  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:07.534236  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:07.536065  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:07.564333  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:07.574928  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 10:31:07.716104  753065 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 10:31:07.716150  753065 retry.go:31] will retry after 1.723974252s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 10:31:08.035128  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:08.038207  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:08.069440  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:08.075581  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:08.535611  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:08.539990  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:08.565394  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:08.575506  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:09.041765  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:09.043625  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:09.063431  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:09.076239  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:09.440775  753065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 10:31:09.532946  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:09.539232  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:09.566133  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:09.577818  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:10.032025  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:10.036544  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:10.064493  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:10.076103  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:10.535084  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:10.535199  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:10.564919  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:10.578167  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:10.625102  753065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.184262312s)
	W0908 10:31:10.625166  753065 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 10:31:10.625193  753065 retry.go:31] will retry after 1.819139053s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 10:31:11.034012  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:11.036489  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:11.066165  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:11.079367  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:11.532680  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:11.539057  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:11.613846  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:11.614495  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:12.033857  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:12.040731  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:12.067156  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:12.075523  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:12.445033  753065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 10:31:12.535766  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:12.536438  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:12.564017  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:12.574818  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:13.036137  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:13.040035  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:13.065889  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:13.075148  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:13.533809  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:13.537203  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:13.556743  753065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.111670809s)
	W0908 10:31:13.556782  753065 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 10:31:13.556802  753065 retry.go:31] will retry after 2.596767737s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 10:31:13.564316  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:13.575249  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:14.039235  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:14.041079  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:14.064642  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:14.073726  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:14.534064  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:14.537913  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:14.565873  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:14.575726  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:15.069687  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:15.070207  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:15.071363  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:15.078238  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:15.534432  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:15.535745  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:15.569365  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:15.575366  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:16.038512  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:16.039278  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:16.065480  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:16.086203  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:16.154054  753065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 10:31:16.534533  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:16.540417  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:16.568037  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:16.577170  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:17.040747  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:17.040901  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:17.067390  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:17.074342  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:17.322731  753065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.168631755s)
	W0908 10:31:17.322792  753065 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 10:31:17.322821  753065 retry.go:31] will retry after 3.363199402s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 10:31:17.531054  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:17.535270  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:17.563187  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:17.578446  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:18.075501  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:18.077848  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:18.079457  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:18.079803  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:18.532498  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:18.536655  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:18.566035  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:18.575288  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:19.032315  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:19.034402  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:19.157976  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:19.164091  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:19.532355  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:19.536601  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:19.565921  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:19.576525  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:20.119417  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:20.126880  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:20.126905  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:20.126938  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:20.531944  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:20.534117  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:20.564019  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:20.574989  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:20.686210  753065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 10:31:21.032470  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:21.034267  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:21.063389  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:21.077396  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 10:31:21.354669  753065 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 10:31:21.354720  753065 retry.go:31] will retry after 4.171745105s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 10:31:21.532837  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:21.533977  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:21.563281  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:21.574709  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:22.031225  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:22.033789  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:22.062975  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:22.074259  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:22.533140  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:22.534285  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:22.563637  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:22.573897  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:23.031428  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:23.033810  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:23.063227  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:23.074994  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:23.531717  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:23.533471  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:23.564022  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:23.575035  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:24.032589  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:24.034926  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:24.063724  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:24.073823  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:24.531803  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:24.535044  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:24.563458  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:24.575048  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:25.034289  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:25.036083  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:25.063666  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:25.073886  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:25.527491  753065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 10:31:25.531213  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:25.533414  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:25.565592  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:25.574404  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:26.033146  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:26.036235  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:26.065556  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:26.077725  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 10:31:26.451187  753065 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 10:31:26.451237  753065 retry.go:31] will retry after 8.296025046s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 10:31:26.535595  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:26.535820  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:26.566039  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:26.577036  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:27.033123  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:27.034466  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:27.064220  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:27.076655  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:27.531875  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:27.534933  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:27.565146  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:27.577122  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:28.034375  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:28.037316  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:28.063565  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:28.078702  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:28.537010  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:28.539718  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:28.566876  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:28.574895  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:29.038324  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:29.040200  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:29.064834  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:29.076992  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:29.535396  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:29.536657  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:29.566926  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:29.575585  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:30.032037  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:30.034023  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:30.063941  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:30.075000  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:30.535627  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:30.535774  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:30.563332  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:30.578774  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:31.034078  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:31.036504  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:31.065470  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:31.074464  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:31.532143  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:31.533797  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:31.562796  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:31.577219  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:32.032675  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:32.034961  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:32.062515  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:32.074488  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:32.533356  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:32.534574  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:32.562824  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:32.575403  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:33.032787  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:33.034805  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:33.063134  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:33.074681  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:33.531279  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:33.533388  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:33.564877  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:33.574451  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:34.033443  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:34.034636  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:34.062790  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:34.074052  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:34.533123  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:34.534369  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:34.563968  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:34.574912  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:34.748258  753065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 10:31:35.035969  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:35.036532  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:35.062241  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:35.075727  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 10:31:35.420621  753065 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 10:31:35.420666  753065 retry.go:31] will retry after 8.965565685s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 10:31:35.533639  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:35.536098  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:35.563118  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:35.575122  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:36.037903  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:36.038017  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:36.065449  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:36.076802  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:36.541380  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:36.545414  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:36.569854  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:36.579308  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:37.133138  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:37.133335  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:37.133901  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:37.134088  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:37.535654  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:37.536222  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:37.568197  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:37.577995  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:38.032863  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:38.037179  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:38.065053  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:38.075952  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:38.535782  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:38.536084  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:38.563362  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:38.575415  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:39.033948  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:39.036551  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:39.064488  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:39.078626  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:39.534499  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:39.537684  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:39.563261  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:39.577656  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:40.045249  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:40.051017  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:40.064442  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:40.075609  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:40.540579  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:40.540765  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:40.564854  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:40.577600  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:41.034570  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:41.037101  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:41.064412  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:41.075640  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:41.534606  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:41.537919  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:41.565797  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:41.576634  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:42.033204  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:42.034599  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:42.065315  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:42.076521  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:42.566544  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:42.566631  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:42.570831  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:42.576453  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:43.034466  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:43.039722  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:43.068123  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:43.077722  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:43.535067  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:43.537022  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:43.634899  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:43.634976  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:44.032403  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:44.035739  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:44.062870  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:44.074734  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:44.387177  753065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 10:31:44.535555  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:44.535820  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:44.564540  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:44.578776  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:45.033451  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:45.034962  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:45.063688  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:45.075668  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 10:31:45.161328  753065 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 10:31:45.161365  753065 retry.go:31] will retry after 24.923909203s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 10:31:45.532003  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:45.534235  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:45.566124  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:45.575183  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:46.032033  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:46.035097  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:46.062950  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:46.074036  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:46.532180  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:46.534131  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:46.563367  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:46.574972  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:47.031712  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:47.034151  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:47.062986  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:47.075314  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:47.532486  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:47.535083  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:47.564243  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:47.575580  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:48.031962  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:48.034710  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:48.062743  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:48.073673  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:48.532252  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:48.537556  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:48.567072  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:48.575689  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:49.032225  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:49.034452  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:49.064914  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:49.076165  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:49.534981  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:49.538899  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:49.572944  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:49.575473  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:50.033622  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:50.034649  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:50.062462  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:50.075959  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:50.533944  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:50.534078  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:50.563801  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:50.573796  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:51.033830  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:51.034025  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:51.064315  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:51.075146  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:51.531545  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:51.533880  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:51.564765  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:51.574496  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:52.034191  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:52.034706  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:52.063413  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:52.074985  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:52.531649  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:52.533467  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:52.563784  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:52.574537  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:53.035342  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:53.035342  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:53.064174  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:53.075033  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:53.531719  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:53.533330  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:53.563727  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:53.575628  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:54.031345  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:54.034544  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:54.064685  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:54.075301  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:54.534112  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:54.536079  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:54.564026  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:54.574772  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:55.033279  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:55.035292  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:55.064172  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:55.077339  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:55.532971  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:55.534660  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:55.562460  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:55.574687  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:56.033573  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:56.036203  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:56.064245  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:56.076686  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:56.531601  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:31:56.534637  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:56.563041  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:56.574936  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:57.032840  753065 kapi.go:107] duration metric: took 57.004963102s to wait for kubernetes.io/minikube-addons=registry ...
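
Editor's note: the `kapi.go:96` lines are a label-selector poll: roughly every half second in these logs the addon waiter lists the pods matching each selector and reports the aggregate state until the pods leave Pending, which is why the registry wait above only resolves after ~57s. The sketch below shows the shape of such a loop with client-go; it is an assumption-laden illustration (namespace handling, the 500ms interval, and the function name are the editor's), not minikube's kapi implementation.

```go
package addons

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodsRunning polls pods matching selector in ns until all of them are
// Running or the timeout expires. Illustrative only; it merely mirrors the
// log pattern above (periodic list, report state, stop once Running).
func waitForPodsRunning(ctx context.Context, c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	start := time.Now()
	err := wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, nil // transient API errors: keep polling
		}
		if len(pods.Items) == 0 {
			return false, nil // nothing scheduled yet: still "Pending" from the waiter's view
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				return false, nil
			}
		}
		return true, nil
	})
	if err != nil {
		return fmt.Errorf("waiting for pod %q: %w", selector, err)
	}
	fmt.Printf("duration metric: took %s to wait for %s\n", time.Since(start), selector)
	return nil
}
```
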
	I0908 10:31:57.034446  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:57.064025  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:57.075294  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:57.534510  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:57.566279  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:57.575192  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:58.035527  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:58.064032  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:58.075410  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:58.534644  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:58.563980  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:58.574486  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:59.035108  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:59.063051  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:59.075055  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:31:59.533761  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:31:59.563939  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:31:59.574374  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:00.034280  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:00.063278  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:00.075644  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:00.534493  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:00.563795  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:00.575530  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:01.036382  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:01.064739  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:01.074211  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:01.536078  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:01.565891  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:01.575430  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:02.034638  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:02.064720  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:02.074796  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:02.535747  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:02.564811  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:02.574535  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:03.037291  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:03.066668  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:03.077845  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:03.535137  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:03.567330  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:03.576233  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:04.036572  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:04.064675  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:04.075999  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:04.535149  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:04.563382  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:04.575436  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:05.056844  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:05.062851  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:05.074545  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:05.534839  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:05.564490  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:05.575523  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:06.034708  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:06.062556  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:06.075162  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:06.534171  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:06.563110  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:06.574416  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:07.034611  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:07.064130  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:07.075304  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:07.534481  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:07.565883  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:07.574696  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:08.034497  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:08.064281  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:08.074749  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:08.535849  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:08.563787  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:08.575975  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:09.251089  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:09.251552  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:09.251725  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:09.535456  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:09.564511  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:09.575676  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:10.034493  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:10.063786  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:10.074445  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:10.085597  753065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 10:32:10.534831  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:10.565256  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:10.575771  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 10:32:10.775559  753065 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 10:32:10.775609  753065 retry.go:31] will retry after 39.347652152s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
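The apply failure above is not fatal on the first attempt: minikube schedules another try after a delay (the "will retry after 39.347652152s" line). The following is only a rough, self-contained sketch of that retry-with-backoff pattern in Go; the applyManifest helper, the delays, and the jitter are illustrative assumptions, not minikube's actual retry.go.

	// Illustrative retry-with-backoff loop in the spirit of the
	// "will retry after ..." lines above; not minikube's retry.go.
	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// applyManifest stands in for the kubectl apply call; hypothetical.
	func applyManifest() error {
		return fmt.Errorf("error validating data: apiVersion not set, kind not set")
	}

	func main() {
		backoff := 10 * time.Second
		for attempt := 1; attempt <= 5; attempt++ {
			if err := applyManifest(); err == nil {
				fmt.Println("apply succeeded")
				return
			} else {
				// Add jitter so retries from parallel addon installs don't align.
				delay := backoff + time.Duration(rand.Int63n(int64(backoff)))
				fmt.Printf("attempt %d failed (%v), will retry after %s\n", attempt, err, delay)
				time.Sleep(delay)
				backoff *= 2
			}
		}
	}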
	I0908 10:32:11.034329  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:11.063991  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:11.075145  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:11.534144  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:11.563522  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:11.579221  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:12.033925  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:12.063746  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:12.074366  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:12.533815  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:12.562917  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:12.574629  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:13.034596  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:13.063482  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:13.075247  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:13.534529  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:13.563842  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:13.574928  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:14.035033  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:14.063497  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:14.075109  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:14.533991  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:14.564527  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:14.578285  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:15.034793  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:15.063041  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:15.074714  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:15.535791  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:15.564977  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:15.575642  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:16.042970  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:16.062855  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:16.075320  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:16.539076  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:16.566609  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:16.575509  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:17.036516  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:17.065732  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:17.077433  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:17.538533  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:17.566227  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:17.575572  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:18.037915  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:18.063676  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:18.075292  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:18.536846  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:18.566001  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:18.575435  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:19.036915  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:19.064319  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:19.076722  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:19.536732  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:19.563945  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:19.577443  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:20.040803  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:20.063161  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:20.074862  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:20.534730  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:20.563066  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:20.574845  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:21.040968  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:21.140815  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:21.141221  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:21.540758  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:21.566422  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:21.579472  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:22.039446  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:22.064889  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:22.075225  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:22.533936  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:22.563473  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:22.575616  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:23.035448  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:23.136282  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:23.136665  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:23.537298  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:23.567208  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:23.575490  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:24.039332  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:24.065833  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:24.077063  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:24.535037  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:24.567479  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:24.580375  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:25.037142  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:25.068186  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:25.076418  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:25.551248  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:25.569353  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:25.575957  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:26.401033  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:26.403444  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:26.404354  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:26.535238  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:26.565381  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:26.602711  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:27.038783  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:27.066539  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:27.075503  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:27.536273  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:27.567326  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:27.576469  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:28.036173  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:28.066896  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:28.137801  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:28.538561  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:28.568933  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:28.579122  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:29.035660  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:29.136231  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:29.136450  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:29.538954  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:29.565913  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:29.578812  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:30.039592  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:30.063312  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:30.075633  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:30.534615  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:30.563614  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:30.576457  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:31.034184  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:31.069111  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:31.078312  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:31.537988  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:31.569254  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:31.588996  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:32.035251  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:32.064947  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:32.074542  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:32.535286  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:32.565728  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:32.576315  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:33.036694  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:33.063109  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:33.077543  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:33.534567  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:33.563771  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:33.574137  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:34.034024  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:34.064913  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:34.075214  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:34.534325  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:34.564240  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:34.576783  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:35.034795  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:35.064091  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:35.074635  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:35.535064  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:35.563738  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:32:35.576518  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:36.034698  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:36.063147  753065 kapi.go:107] duration metric: took 1m35.00367621s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0908 10:32:36.074393  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:36.534845  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:36.635896  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:37.035539  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:37.074655  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:37.535332  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:37.575996  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:38.035217  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:38.076291  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:38.535242  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:38.575406  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:39.034380  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:39.075136  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:39.535514  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:39.575477  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:40.034120  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:40.074988  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:40.534758  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:40.574379  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:41.034463  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:41.074374  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:41.534532  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:41.575836  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:42.035103  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:42.075098  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:42.535483  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:42.576079  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:43.035115  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:43.075067  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:43.535782  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:43.574802  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:44.035899  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:44.075577  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:44.534711  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:44.575028  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:45.035912  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:45.076104  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:45.535175  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:45.575130  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:46.034675  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:46.074797  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:46.536021  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:46.574816  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:47.035474  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:47.075319  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:47.534233  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:47.635140  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:48.035246  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:48.075152  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:48.534710  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:48.574683  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:49.034624  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:49.074604  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:49.536436  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:49.576092  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:50.034132  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:50.075130  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:50.124260  753065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 10:32:50.534309  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:50.577312  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 10:32:50.792455  753065 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 10:32:50.792575  753065 main.go:141] libmachine: Making call to close driver server
	I0908 10:32:50.792594  753065 main.go:141] libmachine: (addons-451875) Calling .Close
	I0908 10:32:50.792913  753065 main.go:141] libmachine: Successfully made call to close driver server
	I0908 10:32:50.792938  753065 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 10:32:50.792949  753065 main.go:141] libmachine: Making call to close driver server
	I0908 10:32:50.792958  753065 main.go:141] libmachine: (addons-451875) Calling .Close
	I0908 10:32:50.792989  753065 main.go:141] libmachine: (addons-451875) DBG | Closing plugin on server side
	I0908 10:32:50.793286  753065 main.go:141] libmachine: Successfully made call to close driver server
	I0908 10:32:50.793307  753065 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 10:32:50.793327  753065 main.go:141] libmachine: (addons-451875) DBG | Closing plugin on server side
	W0908 10:32:50.793417  753065 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
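What finally surfaces as "Enabling 'inspektor-gadget' returned an error" is kubectl's client-side validation: the first document in /etc/kubernetes/addons/ig-crd.yaml reaches the node without top-level apiVersion and kind fields. A hypothetical pre-check for exactly that condition is sketched below; it assumes the gopkg.in/yaml.v3 package, handles only a single-document manifest, and is not part of minikube or kubectl.

	// Hypothetical pre-check that a manifest declares apiVersion and kind,
	// mirroring the kubectl validation error quoted above. Single YAML
	// document only; not part of minikube.
	package main

	import (
		"fmt"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		data, err := os.ReadFile(os.Args[1]) // e.g. ig-crd.yaml
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		var doc map[string]interface{}
		if err := yaml.Unmarshal(data, &doc); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for _, field := range []string{"apiVersion", "kind"} {
			if _, ok := doc[field]; !ok {
				fmt.Printf("%s not set\n", field)
			}
		}
	}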
	I0908 10:32:51.035236  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:51.076174  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:51.535087  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:51.636294  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:52.034485  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:52.075687  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:52.534427  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:52.576221  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:53.033781  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:53.075633  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:53.535017  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:53.575318  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:54.034714  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:54.074835  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:54.534984  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:54.575248  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:55.035219  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:55.074556  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:55.535039  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:55.575185  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:56.033936  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:56.074776  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:56.536139  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:56.575572  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:57.034897  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:57.074818  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:57.534873  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:57.577486  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:58.035523  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:58.076195  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:58.534132  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:58.574932  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:59.036132  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:59.075208  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:32:59.534266  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:32:59.575594  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:00.034394  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:33:00.075286  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:00.534984  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:33:00.575030  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:01.037137  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:33:01.137787  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:01.535217  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:33:01.575921  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:02.035465  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:33:02.075953  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:02.535799  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:33:02.575415  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:03.035379  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:33:03.075213  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:03.534237  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:33:03.581047  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:04.036006  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:33:04.074968  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:04.534394  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:33:04.575380  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:05.034809  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:33:05.074459  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:05.534278  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:33:05.575062  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:06.035018  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:33:06.075349  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:06.536496  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:33:06.638500  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:07.035222  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:33:07.074660  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:07.534530  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:33:07.576579  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:08.034984  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:33:08.074784  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:08.535109  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:33:08.575072  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:09.035802  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:33:09.074763  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:09.534887  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:33:09.575325  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:10.034009  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:33:10.074631  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:10.534427  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:33:10.575793  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:11.034747  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:33:11.075385  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:11.544006  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:33:11.591126  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:12.035696  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:33:12.074566  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:12.534774  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:33:12.574579  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:13.034360  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:33:13.075011  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:13.534962  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:33:13.574903  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:14.035332  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:33:14.075374  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:14.535053  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:33:14.577932  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:15.039707  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:33:15.078312  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:15.540851  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:33:15.576973  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:16.045577  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:33:16.082922  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:16.536098  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:33:16.580677  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:17.035734  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:33:17.076148  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:17.534535  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:33:17.587512  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:18.038126  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:33:18.079927  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:18.535297  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:33:18.575589  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:19.035656  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:33:19.081088  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:19.537500  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:33:19.581699  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:20.037059  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:33:20.078482  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:20.534401  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:33:20.575559  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:21.038140  753065 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:33:21.138152  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:21.536076  753065 kapi.go:107] duration metric: took 2m21.505607966s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0908 10:33:21.636649  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:22.076901  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:22.575993  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:23.076379  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:23.575212  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:24.075136  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:24.574861  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:25.076135  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:25.574854  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:26.079505  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:26.575351  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:27.076219  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:27.577279  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:28.075570  753065 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:33:28.576263  753065 kapi.go:107] duration metric: took 2m25.004769505s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0908 10:33:28.577796  753065 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-451875 cluster.
	I0908 10:33:28.578929  753065 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0908 10:33:28.579952  753065 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0908 10:33:28.581044  753065 out.go:179] * Enabled addons: nvidia-device-plugin, cloud-spanner, amd-gpu-device-plugin, ingress-dns, metrics-server, storage-provisioner, yakd, registry-creds, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0908 10:33:28.581965  753065 addons.go:514] duration metric: took 2m37.572877918s for enable addons: enabled=[nvidia-device-plugin cloud-spanner amd-gpu-device-plugin ingress-dns metrics-server storage-provisioner yakd registry-creds storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
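The gcp-auth notes above describe an opt-out: any pod carrying a label with the gcp-auth-skip-secret key is left untouched by the credential-mounting webhook. Below is a minimal sketch of such a pod object in Go; only the label key comes from the log, while the pod name, image, label value, and the use of the client-go API types are illustrative assumptions.

	// Illustrative only: a pod labelled so the gcp-auth webhook leaves it
	// alone, per the hint in the log above. Everything except the label
	// key "gcp-auth-skip-secret" is a placeholder.
	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	func main() {
		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name:   "no-gcp-creds",
				Labels: map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{Name: "app", Image: "nginx"}},
			},
		}
		fmt.Println(pod.Name, pod.Labels)
	}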
	I0908 10:33:28.582007  753065 start.go:246] waiting for cluster config update ...
	I0908 10:33:28.582034  753065 start.go:255] writing updated cluster config ...
	I0908 10:33:28.582349  753065 ssh_runner.go:195] Run: rm -f paused
	I0908 10:33:28.589203  753065 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 10:33:28.593275  753065 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-tvgs6" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 10:33:28.598981  753065 pod_ready.go:94] pod "coredns-66bc5c9577-tvgs6" is "Ready"
	I0908 10:33:28.599005  753065 pod_ready.go:86] duration metric: took 5.70868ms for pod "coredns-66bc5c9577-tvgs6" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 10:33:28.601294  753065 pod_ready.go:83] waiting for pod "etcd-addons-451875" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 10:33:28.606402  753065 pod_ready.go:94] pod "etcd-addons-451875" is "Ready"
	I0908 10:33:28.606421  753065 pod_ready.go:86] duration metric: took 5.10669ms for pod "etcd-addons-451875" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 10:33:28.608486  753065 pod_ready.go:83] waiting for pod "kube-apiserver-addons-451875" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 10:33:28.612680  753065 pod_ready.go:94] pod "kube-apiserver-addons-451875" is "Ready"
	I0908 10:33:28.612706  753065 pod_ready.go:86] duration metric: took 4.203439ms for pod "kube-apiserver-addons-451875" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 10:33:28.614921  753065 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-451875" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 10:33:28.994698  753065 pod_ready.go:94] pod "kube-controller-manager-addons-451875" is "Ready"
	I0908 10:33:28.994738  753065 pod_ready.go:86] duration metric: took 379.795589ms for pod "kube-controller-manager-addons-451875" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 10:33:29.193644  753065 pod_ready.go:83] waiting for pod "kube-proxy-4whd8" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 10:33:29.594116  753065 pod_ready.go:94] pod "kube-proxy-4whd8" is "Ready"
	I0908 10:33:29.594146  753065 pod_ready.go:86] duration metric: took 400.474501ms for pod "kube-proxy-4whd8" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 10:33:29.795028  753065 pod_ready.go:83] waiting for pod "kube-scheduler-addons-451875" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 10:33:30.193631  753065 pod_ready.go:94] pod "kube-scheduler-addons-451875" is "Ready"
	I0908 10:33:30.193663  753065 pod_ready.go:86] duration metric: took 398.606075ms for pod "kube-scheduler-addons-451875" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 10:33:30.193674  753065 pod_ready.go:40] duration metric: took 1.604427801s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 10:33:30.235821  753065 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0908 10:33:30.237462  753065 out.go:179] * Done! kubectl is now configured to use "addons-451875" cluster and "default" namespace by default
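	(Editor's aside, not part of the captured log: the gcp-auth hints printed a few lines above can be acted on with the two console commands below. This is only a sketch; the pod name example-pod, the nginx image, and the label value "true" are illustrative assumptions, while the gcp-auth-skip-secret label key and the --refresh flag are the ones named in the log itself.)
	
	# Create a pod whose spec already carries the gcp-auth-skip-secret label, so the addon does not mount credentials into it
	# (pod name, image, and label value are placeholders chosen for this example)
	kubectl --context addons-451875 run example-pod --image=nginx --labels=gcp-auth-skip-secret=true
	
	# Re-run the addon with --refresh, as the hint suggests, to mount credentials into pods that already existed
	out/minikube-linux-amd64 -p addons-451875 addons enable gcp-auth --refresh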
	
	
	==> CRI-O <==
	Sep 08 10:36:53 addons-451875 crio[826]: time="2025-09-08 10:36:53.841480594Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757327813841212158,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596879,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a752cf07-f9b1-47a9-94ed-7a6e2fb4b8ad name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 10:36:53 addons-451875 crio[826]: time="2025-09-08 10:36:53.842361773Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=df32962c-e4f6-4492-ace1-103e3d85bbbe name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 10:36:53 addons-451875 crio[826]: time="2025-09-08 10:36:53.842446660Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=df32962c-e4f6-4492-ace1-103e3d85bbbe name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 10:36:53 addons-451875 crio[826]: time="2025-09-08 10:36:53.842788269Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:08f0e033d6706fa0ca0fa46a36546bc6923b5d41e221b785eedb2c5cfd80f119,PodSandboxId:f3b2da45e8613ca276ca42788c25b5965710f7670f92f70397589d2db0e321a3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1757327670596478859,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: faee7926-dfb9-4e96-b158-707d01e57f27,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a78e55ab3bd9d821a308e46c10e18e9c4814b6612ce8815c8123b10c2ffa354,PodSandboxId:84d53d28e09afa1f54378f75f807020a8a4c4910b3f4f47f192be62ada544ac4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1757327617509884369,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a8c1d3f8-0cf8-417f-84a4-d6271a60b5cf,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e25709b9952ee9f2ca573d0767e7a11552ecc5d03af1ab88d9553938175f3612,PodSandboxId:39885c319fda88f11042547d5f5af19395890b7b923a20c01b417d0a4da88dc2,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1757327600982186594,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-fhgnr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d44ee6d5-3540-44a8-8d49-16e25ba76d37,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:29d89d5927ad01c21b909a751b0d38985b01f4eefa6da7a91a31acc3b6bb0f52,PodSandboxId:41b03a009c0e2d413c80d9abb6f5507b33f781ea80a6c3caa5525677c4a12e4c,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Sta
te:CONTAINER_EXITED,CreatedAt:1757327543452480430,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-d7ntz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 67c67e51-5951-423d-8e86-9ef548382fdf,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64e1a20912e75fdb604055731718db3b395c088b0c08f15d7e14f56ccfae6e9a,PodSandboxId:71031aa0a2b2ffe0c207e34e43b2e16773e22fc9505702fc96abf34e7447a057,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d1697145
49c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1757327542921330843,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4x9dk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3b0fd21b-e7c0-421b-9c4d-7c71724fdc0f,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a80da63bc96742aef5ec6a095f79253d9b1cc0c7596ef9180813f3ce72c1cd78,PodSandboxId:ea9ae8820dc8498e9766889d53553f949392f2bf6d684053d60bf8be15bd2004,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:5768867a6776f266b9c9c6b8b32a069f9346493f7bede50c3dcb28859f36d506,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c08
45d92124611df4baa18fd96892b107dd9c28f6178793e020b10264622c27b,State:CONTAINER_RUNNING,CreatedAt:1757327538893231507,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-kl2d9,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 0a6b472c-8142-4fff-abd2-077d309f569f,},Annotations:map[string]string{io.kubernetes.container.hash: 8e2c4a14,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e887e643d345d95ee86d6bebce9f125f39877d4051c6ab1c07df7fabe67f68be,PodSandboxId:e0200ef03b98a416141e6e0ef5b217bc755d035f9190a58228262897a10f6bc5,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c88
0e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1757327503494728728,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e651e5ca-fab8-4d0e-af0d-e8d0281dcb48,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9ead14a37e95a7d8615256a818ecd1399a216c03920ac3454cbc9e1a65ade67,PodSandboxId:5d43194cf45334d850ab0b0d7842406b3cbd796b89af612d63f8e2d7242c0d9b,Metadata:&ContainerMetadata{Name:amd-gpu
-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1757327463786363172,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-7clhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 330689b3-479a-458b-84e0-3903da038130,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:158ceaff6160935615608748d640a630b840629b33fd990b4782846f93445ef3,PodSandboxId:19666b9554163103e09ada9874d7594ce1b9e321514770aae92c34e5e278e56c,
Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1757327459781827689,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65573278-5aba-44c7-b180-a1fd08931683,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f46cf0702a955dee84b5b25ad64aaf63cb2cc6b8c9fa4645dc82804bb604857,PodSandboxId:5b159ff14ba00b627f77bb4e0ec8f27313146a3c36ea5569e5e2dc8940513034,Metadata:&Co
ntainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1757327452750981647,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-tvgs6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e3144d8-541c-4996-9c58-43221e2a663d,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61a9c70ee3ea7d3cc75bb09fba28c525977bf48b30c3c6cebd9ee232ded66a67,PodSandboxId:da4ec69ec2ff251b4ef55952e6ef11637e7ca06dcefbc4e1654acc6a67349d90,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1757327451721695599,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4whd8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 317bb955-9731-4239-9266-1835fff2a8fa,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b1f3f5abe23bc75198f4a9702cb47edf50e082001f2ef583c8fd0f10fcc94bb,PodSandboxId:c83bcda392850289539e4c3980bb827c961832773b4f57fb6215494d63542d32,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1757327440651754904,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-451875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3836e4b9f148abb502bdb4997c1e113d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPor
t\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cc0185a543f5dcd48eb2554716bb89fd6681e881ca9e7e9c2752d7b00131d0f,PodSandboxId:b9bd974e25551f184b50126bc5dc325eda177a60fafd3e8cd5384ffc81f6cfe6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1757327440595777588,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-451875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de1ce46fc9752876a619cc21a41b21a7,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.
container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:511cd962fc02cef3d315925f8b294d3bc9dec9c5a9db8908200fa81671ac6f05,PodSandboxId:566b9d4f1c14c0bfd5b255bd745233643beafd0b1dc82c4d2ca0fefe094378e3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1757327440601162523,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-451875,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: e9821f1a638bf7f54686327baf3828e2,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e33f2acee1b2199b20e01db75304327ee6ca0f8319f234f420afd8673161d922,PodSandboxId:2a0131e3d958218c3d62e2091e2594e336d1ed3cc55e33c9390c9354c93ff3fe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1757327440577408808,Labels:map[string]string{io.kubernetes.container.name: kube-apiserve
r,io.kubernetes.pod.name: kube-apiserver-addons-451875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6129e0cc82672ca83f7a45a74b9c219c,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=df32962c-e4f6-4492-ace1-103e3d85bbbe name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 10:36:53 addons-451875 crio[826]: time="2025-09-08 10:36:53.886633771Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a1d41b6c-c421-4638-bbac-fddf5e3cdf7d name=/runtime.v1.RuntimeService/Version
	Sep 08 10:36:53 addons-451875 crio[826]: time="2025-09-08 10:36:53.886915878Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a1d41b6c-c421-4638-bbac-fddf5e3cdf7d name=/runtime.v1.RuntimeService/Version
	Sep 08 10:36:53 addons-451875 crio[826]: time="2025-09-08 10:36:53.888760445Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1495c086-99d2-4c45-b4f1-016ac103b6f1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 10:36:53 addons-451875 crio[826]: time="2025-09-08 10:36:53.890336457Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757327813890312308,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596879,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1495c086-99d2-4c45-b4f1-016ac103b6f1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 10:36:53 addons-451875 crio[826]: time="2025-09-08 10:36:53.891058765Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=324e9d08-219a-47ff-abc9-724ff5970b53 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 10:36:53 addons-451875 crio[826]: time="2025-09-08 10:36:53.891170606Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=324e9d08-219a-47ff-abc9-724ff5970b53 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 10:36:53 addons-451875 crio[826]: time="2025-09-08 10:36:53.891541900Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:08f0e033d6706fa0ca0fa46a36546bc6923b5d41e221b785eedb2c5cfd80f119,PodSandboxId:f3b2da45e8613ca276ca42788c25b5965710f7670f92f70397589d2db0e321a3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1757327670596478859,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: faee7926-dfb9-4e96-b158-707d01e57f27,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a78e55ab3bd9d821a308e46c10e18e9c4814b6612ce8815c8123b10c2ffa354,PodSandboxId:84d53d28e09afa1f54378f75f807020a8a4c4910b3f4f47f192be62ada544ac4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1757327617509884369,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a8c1d3f8-0cf8-417f-84a4-d6271a60b5cf,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e25709b9952ee9f2ca573d0767e7a11552ecc5d03af1ab88d9553938175f3612,PodSandboxId:39885c319fda88f11042547d5f5af19395890b7b923a20c01b417d0a4da88dc2,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1757327600982186594,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-fhgnr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d44ee6d5-3540-44a8-8d49-16e25ba76d37,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:29d89d5927ad01c21b909a751b0d38985b01f4eefa6da7a91a31acc3b6bb0f52,PodSandboxId:41b03a009c0e2d413c80d9abb6f5507b33f781ea80a6c3caa5525677c4a12e4c,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Sta
te:CONTAINER_EXITED,CreatedAt:1757327543452480430,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-d7ntz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 67c67e51-5951-423d-8e86-9ef548382fdf,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64e1a20912e75fdb604055731718db3b395c088b0c08f15d7e14f56ccfae6e9a,PodSandboxId:71031aa0a2b2ffe0c207e34e43b2e16773e22fc9505702fc96abf34e7447a057,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d1697145
49c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1757327542921330843,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4x9dk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3b0fd21b-e7c0-421b-9c4d-7c71724fdc0f,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a80da63bc96742aef5ec6a095f79253d9b1cc0c7596ef9180813f3ce72c1cd78,PodSandboxId:ea9ae8820dc8498e9766889d53553f949392f2bf6d684053d60bf8be15bd2004,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:5768867a6776f266b9c9c6b8b32a069f9346493f7bede50c3dcb28859f36d506,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c08
45d92124611df4baa18fd96892b107dd9c28f6178793e020b10264622c27b,State:CONTAINER_RUNNING,CreatedAt:1757327538893231507,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-kl2d9,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 0a6b472c-8142-4fff-abd2-077d309f569f,},Annotations:map[string]string{io.kubernetes.container.hash: 8e2c4a14,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e887e643d345d95ee86d6bebce9f125f39877d4051c6ab1c07df7fabe67f68be,PodSandboxId:e0200ef03b98a416141e6e0ef5b217bc755d035f9190a58228262897a10f6bc5,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c88
0e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1757327503494728728,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e651e5ca-fab8-4d0e-af0d-e8d0281dcb48,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9ead14a37e95a7d8615256a818ecd1399a216c03920ac3454cbc9e1a65ade67,PodSandboxId:5d43194cf45334d850ab0b0d7842406b3cbd796b89af612d63f8e2d7242c0d9b,Metadata:&ContainerMetadata{Name:amd-gpu
-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1757327463786363172,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-7clhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 330689b3-479a-458b-84e0-3903da038130,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:158ceaff6160935615608748d640a630b840629b33fd990b4782846f93445ef3,PodSandboxId:19666b9554163103e09ada9874d7594ce1b9e321514770aae92c34e5e278e56c,
Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1757327459781827689,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65573278-5aba-44c7-b180-a1fd08931683,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f46cf0702a955dee84b5b25ad64aaf63cb2cc6b8c9fa4645dc82804bb604857,PodSandboxId:5b159ff14ba00b627f77bb4e0ec8f27313146a3c36ea5569e5e2dc8940513034,Metadata:&Co
ntainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1757327452750981647,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-tvgs6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e3144d8-541c-4996-9c58-43221e2a663d,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61a9c70ee3ea7d3cc75bb09fba28c525977bf48b30c3c6cebd9ee232ded66a67,PodSandboxId:da4ec69ec2ff251b4ef55952e6ef11637e7ca06dcefbc4e1654acc6a67349d90,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1757327451721695599,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4whd8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 317bb955-9731-4239-9266-1835fff2a8fa,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b1f3f5abe23bc75198f4a9702cb47edf50e082001f2ef583c8fd0f10fcc94bb,PodSandboxId:c83bcda392850289539e4c3980bb827c961832773b4f57fb6215494d63542d32,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1757327440651754904,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-451875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3836e4b9f148abb502bdb4997c1e113d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPor
t\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cc0185a543f5dcd48eb2554716bb89fd6681e881ca9e7e9c2752d7b00131d0f,PodSandboxId:b9bd974e25551f184b50126bc5dc325eda177a60fafd3e8cd5384ffc81f6cfe6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1757327440595777588,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-451875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de1ce46fc9752876a619cc21a41b21a7,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.
container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:511cd962fc02cef3d315925f8b294d3bc9dec9c5a9db8908200fa81671ac6f05,PodSandboxId:566b9d4f1c14c0bfd5b255bd745233643beafd0b1dc82c4d2ca0fefe094378e3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1757327440601162523,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-451875,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: e9821f1a638bf7f54686327baf3828e2,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e33f2acee1b2199b20e01db75304327ee6ca0f8319f234f420afd8673161d922,PodSandboxId:2a0131e3d958218c3d62e2091e2594e336d1ed3cc55e33c9390c9354c93ff3fe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1757327440577408808,Labels:map[string]string{io.kubernetes.container.name: kube-apiserve
r,io.kubernetes.pod.name: kube-apiserver-addons-451875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6129e0cc82672ca83f7a45a74b9c219c,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=324e9d08-219a-47ff-abc9-724ff5970b53 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 10:36:53 addons-451875 crio[826]: time="2025-09-08 10:36:53.926780170Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8337027e-48e7-46ef-9e47-3691efd8ff7d name=/runtime.v1.RuntimeService/Version
	Sep 08 10:36:53 addons-451875 crio[826]: time="2025-09-08 10:36:53.926868530Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8337027e-48e7-46ef-9e47-3691efd8ff7d name=/runtime.v1.RuntimeService/Version
	Sep 08 10:36:53 addons-451875 crio[826]: time="2025-09-08 10:36:53.928880474Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d9b5cd9a-22b0-4714-bd99-9e38a0b4c9fe name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 10:36:53 addons-451875 crio[826]: time="2025-09-08 10:36:53.930139505Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757327813930112883,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596879,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d9b5cd9a-22b0-4714-bd99-9e38a0b4c9fe name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 10:36:53 addons-451875 crio[826]: time="2025-09-08 10:36:53.930811560Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=46b134b8-d337-4f22-b3e4-be5f4d8ee052 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 10:36:53 addons-451875 crio[826]: time="2025-09-08 10:36:53.930869790Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=46b134b8-d337-4f22-b3e4-be5f4d8ee052 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 10:36:53 addons-451875 crio[826]: time="2025-09-08 10:36:53.931181940Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:08f0e033d6706fa0ca0fa46a36546bc6923b5d41e221b785eedb2c5cfd80f119,PodSandboxId:f3b2da45e8613ca276ca42788c25b5965710f7670f92f70397589d2db0e321a3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1757327670596478859,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: faee7926-dfb9-4e96-b158-707d01e57f27,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a78e55ab3bd9d821a308e46c10e18e9c4814b6612ce8815c8123b10c2ffa354,PodSandboxId:84d53d28e09afa1f54378f75f807020a8a4c4910b3f4f47f192be62ada544ac4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1757327617509884369,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a8c1d3f8-0cf8-417f-84a4-d6271a60b5cf,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e25709b9952ee9f2ca573d0767e7a11552ecc5d03af1ab88d9553938175f3612,PodSandboxId:39885c319fda88f11042547d5f5af19395890b7b923a20c01b417d0a4da88dc2,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1757327600982186594,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-fhgnr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d44ee6d5-3540-44a8-8d49-16e25ba76d37,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:29d89d5927ad01c21b909a751b0d38985b01f4eefa6da7a91a31acc3b6bb0f52,PodSandboxId:41b03a009c0e2d413c80d9abb6f5507b33f781ea80a6c3caa5525677c4a12e4c,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Sta
te:CONTAINER_EXITED,CreatedAt:1757327543452480430,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-d7ntz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 67c67e51-5951-423d-8e86-9ef548382fdf,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64e1a20912e75fdb604055731718db3b395c088b0c08f15d7e14f56ccfae6e9a,PodSandboxId:71031aa0a2b2ffe0c207e34e43b2e16773e22fc9505702fc96abf34e7447a057,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d1697145
49c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1757327542921330843,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4x9dk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3b0fd21b-e7c0-421b-9c4d-7c71724fdc0f,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a80da63bc96742aef5ec6a095f79253d9b1cc0c7596ef9180813f3ce72c1cd78,PodSandboxId:ea9ae8820dc8498e9766889d53553f949392f2bf6d684053d60bf8be15bd2004,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:5768867a6776f266b9c9c6b8b32a069f9346493f7bede50c3dcb28859f36d506,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c08
45d92124611df4baa18fd96892b107dd9c28f6178793e020b10264622c27b,State:CONTAINER_RUNNING,CreatedAt:1757327538893231507,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-kl2d9,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 0a6b472c-8142-4fff-abd2-077d309f569f,},Annotations:map[string]string{io.kubernetes.container.hash: 8e2c4a14,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e887e643d345d95ee86d6bebce9f125f39877d4051c6ab1c07df7fabe67f68be,PodSandboxId:e0200ef03b98a416141e6e0ef5b217bc755d035f9190a58228262897a10f6bc5,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c88
0e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1757327503494728728,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e651e5ca-fab8-4d0e-af0d-e8d0281dcb48,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9ead14a37e95a7d8615256a818ecd1399a216c03920ac3454cbc9e1a65ade67,PodSandboxId:5d43194cf45334d850ab0b0d7842406b3cbd796b89af612d63f8e2d7242c0d9b,Metadata:&ContainerMetadata{Name:amd-gpu
-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1757327463786363172,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-7clhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 330689b3-479a-458b-84e0-3903da038130,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:158ceaff6160935615608748d640a630b840629b33fd990b4782846f93445ef3,PodSandboxId:19666b9554163103e09ada9874d7594ce1b9e321514770aae92c34e5e278e56c,
Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1757327459781827689,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65573278-5aba-44c7-b180-a1fd08931683,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f46cf0702a955dee84b5b25ad64aaf63cb2cc6b8c9fa4645dc82804bb604857,PodSandboxId:5b159ff14ba00b627f77bb4e0ec8f27313146a3c36ea5569e5e2dc8940513034,Metadata:&Co
ntainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1757327452750981647,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-tvgs6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e3144d8-541c-4996-9c58-43221e2a663d,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61a9c70ee3ea7d3cc75bb09fba28c525977bf48b30c3c6cebd9ee232ded66a67,PodSandboxId:da4ec69ec2ff251b4ef55952e6ef11637e7ca06dcefbc4e1654acc6a67349d90,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1757327451721695599,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4whd8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 317bb955-9731-4239-9266-1835fff2a8fa,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b1f3f5abe23bc75198f4a9702cb47edf50e082001f2ef583c8fd0f10fcc94bb,PodSandboxId:c83bcda392850289539e4c3980bb827c961832773b4f57fb6215494d63542d32,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1757327440651754904,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-451875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3836e4b9f148abb502bdb4997c1e113d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPor
t\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cc0185a543f5dcd48eb2554716bb89fd6681e881ca9e7e9c2752d7b00131d0f,PodSandboxId:b9bd974e25551f184b50126bc5dc325eda177a60fafd3e8cd5384ffc81f6cfe6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1757327440595777588,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-451875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de1ce46fc9752876a619cc21a41b21a7,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.
container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:511cd962fc02cef3d315925f8b294d3bc9dec9c5a9db8908200fa81671ac6f05,PodSandboxId:566b9d4f1c14c0bfd5b255bd745233643beafd0b1dc82c4d2ca0fefe094378e3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1757327440601162523,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-451875,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: e9821f1a638bf7f54686327baf3828e2,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e33f2acee1b2199b20e01db75304327ee6ca0f8319f234f420afd8673161d922,PodSandboxId:2a0131e3d958218c3d62e2091e2594e336d1ed3cc55e33c9390c9354c93ff3fe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1757327440577408808,Labels:map[string]string{io.kubernetes.container.name: kube-apiserve
r,io.kubernetes.pod.name: kube-apiserver-addons-451875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6129e0cc82672ca83f7a45a74b9c219c,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=46b134b8-d337-4f22-b3e4-be5f4d8ee052 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 10:36:53 addons-451875 crio[826]: time="2025-09-08 10:36:53.973500894Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8f5f9ee4-64c9-4739-af2d-c22aace66e4e name=/runtime.v1.RuntimeService/Version
	Sep 08 10:36:53 addons-451875 crio[826]: time="2025-09-08 10:36:53.973623160Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8f5f9ee4-64c9-4739-af2d-c22aace66e4e name=/runtime.v1.RuntimeService/Version
	Sep 08 10:36:53 addons-451875 crio[826]: time="2025-09-08 10:36:53.974814003Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a56063d7-0084-4936-a167-055711524342 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 10:36:53 addons-451875 crio[826]: time="2025-09-08 10:36:53.976093550Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757327813976068720,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596879,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a56063d7-0084-4936-a167-055711524342 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 10:36:53 addons-451875 crio[826]: time="2025-09-08 10:36:53.976819713Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ef95c78f-1d91-4107-b938-4061c0eb323c name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 10:36:53 addons-451875 crio[826]: time="2025-09-08 10:36:53.976913836Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ef95c78f-1d91-4107-b938-4061c0eb323c name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 10:36:53 addons-451875 crio[826]: time="2025-09-08 10:36:53.977424424Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:08f0e033d6706fa0ca0fa46a36546bc6923b5d41e221b785eedb2c5cfd80f119,PodSandboxId:f3b2da45e8613ca276ca42788c25b5965710f7670f92f70397589d2db0e321a3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1757327670596478859,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: faee7926-dfb9-4e96-b158-707d01e57f27,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a78e55ab3bd9d821a308e46c10e18e9c4814b6612ce8815c8123b10c2ffa354,PodSandboxId:84d53d28e09afa1f54378f75f807020a8a4c4910b3f4f47f192be62ada544ac4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1757327617509884369,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a8c1d3f8-0cf8-417f-84a4-d6271a60b5cf,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e25709b9952ee9f2ca573d0767e7a11552ecc5d03af1ab88d9553938175f3612,PodSandboxId:39885c319fda88f11042547d5f5af19395890b7b923a20c01b417d0a4da88dc2,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1757327600982186594,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-fhgnr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d44ee6d5-3540-44a8-8d49-16e25ba76d37,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:29d89d5927ad01c21b909a751b0d38985b01f4eefa6da7a91a31acc3b6bb0f52,PodSandboxId:41b03a009c0e2d413c80d9abb6f5507b33f781ea80a6c3caa5525677c4a12e4c,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Sta
te:CONTAINER_EXITED,CreatedAt:1757327543452480430,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-d7ntz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 67c67e51-5951-423d-8e86-9ef548382fdf,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64e1a20912e75fdb604055731718db3b395c088b0c08f15d7e14f56ccfae6e9a,PodSandboxId:71031aa0a2b2ffe0c207e34e43b2e16773e22fc9505702fc96abf34e7447a057,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d1697145
49c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1757327542921330843,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4x9dk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3b0fd21b-e7c0-421b-9c4d-7c71724fdc0f,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a80da63bc96742aef5ec6a095f79253d9b1cc0c7596ef9180813f3ce72c1cd78,PodSandboxId:ea9ae8820dc8498e9766889d53553f949392f2bf6d684053d60bf8be15bd2004,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:5768867a6776f266b9c9c6b8b32a069f9346493f7bede50c3dcb28859f36d506,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c08
45d92124611df4baa18fd96892b107dd9c28f6178793e020b10264622c27b,State:CONTAINER_RUNNING,CreatedAt:1757327538893231507,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-kl2d9,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 0a6b472c-8142-4fff-abd2-077d309f569f,},Annotations:map[string]string{io.kubernetes.container.hash: 8e2c4a14,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e887e643d345d95ee86d6bebce9f125f39877d4051c6ab1c07df7fabe67f68be,PodSandboxId:e0200ef03b98a416141e6e0ef5b217bc755d035f9190a58228262897a10f6bc5,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c88
0e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1757327503494728728,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e651e5ca-fab8-4d0e-af0d-e8d0281dcb48,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9ead14a37e95a7d8615256a818ecd1399a216c03920ac3454cbc9e1a65ade67,PodSandboxId:5d43194cf45334d850ab0b0d7842406b3cbd796b89af612d63f8e2d7242c0d9b,Metadata:&ContainerMetadata{Name:amd-gpu
-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1757327463786363172,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-7clhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 330689b3-479a-458b-84e0-3903da038130,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:158ceaff6160935615608748d640a630b840629b33fd990b4782846f93445ef3,PodSandboxId:19666b9554163103e09ada9874d7594ce1b9e321514770aae92c34e5e278e56c,
Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1757327459781827689,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65573278-5aba-44c7-b180-a1fd08931683,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f46cf0702a955dee84b5b25ad64aaf63cb2cc6b8c9fa4645dc82804bb604857,PodSandboxId:5b159ff14ba00b627f77bb4e0ec8f27313146a3c36ea5569e5e2dc8940513034,Metadata:&Co
ntainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1757327452750981647,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-tvgs6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e3144d8-541c-4996-9c58-43221e2a663d,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61a9c70ee3ea7d3cc75bb09fba28c525977bf48b30c3c6cebd9ee232ded66a67,PodSandboxId:da4ec69ec2ff251b4ef55952e6ef11637e7ca06dcefbc4e1654acc6a67349d90,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1757327451721695599,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4whd8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 317bb955-9731-4239-9266-1835fff2a8fa,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b1f3f5abe23bc75198f4a9702cb47edf50e082001f2ef583c8fd0f10fcc94bb,PodSandboxId:c83bcda392850289539e4c3980bb827c961832773b4f57fb6215494d63542d32,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1757327440651754904,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-451875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3836e4b9f148abb502bdb4997c1e113d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPor
t\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cc0185a543f5dcd48eb2554716bb89fd6681e881ca9e7e9c2752d7b00131d0f,PodSandboxId:b9bd974e25551f184b50126bc5dc325eda177a60fafd3e8cd5384ffc81f6cfe6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1757327440595777588,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-451875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de1ce46fc9752876a619cc21a41b21a7,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.
container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:511cd962fc02cef3d315925f8b294d3bc9dec9c5a9db8908200fa81671ac6f05,PodSandboxId:566b9d4f1c14c0bfd5b255bd745233643beafd0b1dc82c4d2ca0fefe094378e3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1757327440601162523,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-451875,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: e9821f1a638bf7f54686327baf3828e2,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e33f2acee1b2199b20e01db75304327ee6ca0f8319f234f420afd8673161d922,PodSandboxId:2a0131e3d958218c3d62e2091e2594e336d1ed3cc55e33c9390c9354c93ff3fe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1757327440577408808,Labels:map[string]string{io.kubernetes.container.name: kube-apiserve
r,io.kubernetes.pod.name: kube-apiserver-addons-451875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6129e0cc82672ca83f7a45a74b9c219c,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ef95c78f-1d91-4107-b938-4061c0eb323c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	08f0e033d6706       docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                              2 minutes ago       Running             nginx                     0                   f3b2da45e8613       nginx
	4a78e55ab3bd9       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   84d53d28e09af       busybox
	e25709b9952ee       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef             3 minutes ago       Running             controller                0                   39885c319fda8       ingress-nginx-controller-9cc49f96f-fhgnr
	29d89d5927ad0       8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65                                                             4 minutes ago       Exited              patch                     1                   41b03a009c0e2       ingress-nginx-admission-patch-d7ntz
	64e1a20912e75       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   4 minutes ago       Exited              create                    0                   71031aa0a2b2f       ingress-nginx-admission-create-4x9dk
	a80da63bc9674       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:5768867a6776f266b9c9c6b8b32a069f9346493f7bede50c3dcb28859f36d506            4 minutes ago       Running             gadget                    0                   ea9ae8820dc84       gadget-kl2d9
	e887e643d345d       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               5 minutes ago       Running             minikube-ingress-dns      0                   e0200ef03b98a       kube-ingress-dns-minikube
	a9ead14a37e95       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     5 minutes ago       Running             amd-gpu-device-plugin     0                   5d43194cf4533       amd-gpu-device-plugin-7clhx
	158ceaff61609       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   19666b9554163       storage-provisioner
	9f46cf0702a95       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             6 minutes ago       Running             coredns                   0                   5b159ff14ba00       coredns-66bc5c9577-tvgs6
	61a9c70ee3ea7       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                             6 minutes ago       Running             kube-proxy                0                   da4ec69ec2ff2       kube-proxy-4whd8
	3b1f3f5abe23b       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                             6 minutes ago       Running             kube-scheduler            0                   c83bcda392850       kube-scheduler-addons-451875
	511cd962fc02c       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                             6 minutes ago       Running             kube-controller-manager   0                   566b9d4f1c14c       kube-controller-manager-addons-451875
	7cc0185a543f5       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             6 minutes ago       Running             etcd                      0                   b9bd974e25551       etcd-addons-451875
	e33f2acee1b21       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                                             6 minutes ago       Running             kube-apiserver            0                   2a0131e3d9582       kube-apiserver-addons-451875
	
	
	==> coredns [9f46cf0702a955dee84b5b25ad64aaf63cb2cc6b8c9fa4645dc82804bb604857] <==
	[INFO] 10.244.0.8:49176 - 45194 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000590504s
	[INFO] 10.244.0.8:49176 - 28469 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000219948s
	[INFO] 10.244.0.8:49176 - 11749 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000152567s
	[INFO] 10.244.0.8:49176 - 40149 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000097602s
	[INFO] 10.244.0.8:49176 - 34745 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000167459s
	[INFO] 10.244.0.8:49176 - 60969 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000252382s
	[INFO] 10.244.0.8:49176 - 29570 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00016212s
	[INFO] 10.244.0.8:57287 - 35464 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00020987s
	[INFO] 10.244.0.8:57287 - 35120 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000318606s
	[INFO] 10.244.0.8:55320 - 52459 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000102392s
	[INFO] 10.244.0.8:55320 - 52013 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000379444s
	[INFO] 10.244.0.8:54775 - 29623 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000180863s
	[INFO] 10.244.0.8:54775 - 29438 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000336793s
	[INFO] 10.244.0.8:55101 - 3978 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000164209s
	[INFO] 10.244.0.8:55101 - 3803 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000231785s
	[INFO] 10.244.0.23:43690 - 41310 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000495378s
	[INFO] 10.244.0.23:39467 - 16950 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000144385s
	[INFO] 10.244.0.23:37951 - 2409 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000096969s
	[INFO] 10.244.0.23:33314 - 58081 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000172937s
	[INFO] 10.244.0.23:46970 - 49303 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000080795s
	[INFO] 10.244.0.23:59631 - 65465 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000079536s
	[INFO] 10.244.0.23:36674 - 56469 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 268 0.003321751s
	[INFO] 10.244.0.23:59815 - 64751 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003554965s
	[INFO] 10.244.0.27:45546 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00115478s
	[INFO] 10.244.0.27:35672 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000165421s
	
	
	==> describe nodes <==
	Name:               addons-451875
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-451875
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9b5c9e357ec605e3f7a3fbfd5f3e59fa37db6ba2
	                    minikube.k8s.io/name=addons-451875
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T10_30_46_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-451875
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 10:30:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-451875
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 10:36:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 10:34:50 +0000   Mon, 08 Sep 2025 10:30:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 10:34:50 +0000   Mon, 08 Sep 2025 10:30:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 10:34:50 +0000   Mon, 08 Sep 2025 10:30:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 10:34:50 +0000   Mon, 08 Sep 2025 10:30:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.92
	  Hostname:    addons-451875
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	System Info:
	  Machine ID:                 3a91a3e379f345c3ad40599fdc98bb2b
	  System UUID:                3a91a3e3-79f3-45c3-ad40-599fdc98bb2b
	  Boot ID:                    e57d5d08-3383-4bd4-bf1f-9f8354331210
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m24s
	  default                     hello-world-app-5d498dc89-prnxz             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  gadget                      gadget-kl2d9                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-fhgnr    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         5m55s
	  kube-system                 amd-gpu-device-plugin-7clhx                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 coredns-66bc5c9577-tvgs6                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     6m3s
	  kube-system                 etcd-addons-451875                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         6m9s
	  kube-system                 kube-apiserver-addons-451875                250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 kube-controller-manager-addons-451875       200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 kube-proxy-4whd8                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m4s
	  kube-system                 kube-scheduler-addons-451875                100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 6m1s  kube-proxy       
	  Normal  Starting                 6m9s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m9s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m9s  kubelet          Node addons-451875 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m9s  kubelet          Node addons-451875 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m9s  kubelet          Node addons-451875 status is now: NodeHasSufficientPID
	  Normal  NodeReady                6m8s  kubelet          Node addons-451875 status is now: NodeReady
	  Normal  RegisteredNode           6m4s  node-controller  Node addons-451875 event: Registered Node addons-451875 in Controller
	
	
	==> dmesg <==
	[  +7.361663] kauditd_printk_skb: 11 callbacks suppressed
	[Sep 8 10:32] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.160724] kauditd_printk_skb: 11 callbacks suppressed
	[  +6.870588] kauditd_printk_skb: 5 callbacks suppressed
	[  +3.921280] kauditd_printk_skb: 41 callbacks suppressed
	[  +2.383268] kauditd_printk_skb: 121 callbacks suppressed
	[  +2.743204] kauditd_printk_skb: 121 callbacks suppressed
	[Sep 8 10:33] kauditd_printk_skb: 20 callbacks suppressed
	[  +1.022244] kauditd_printk_skb: 50 callbacks suppressed
	[  +8.353638] kauditd_printk_skb: 32 callbacks suppressed
	[  +2.111631] kauditd_printk_skb: 47 callbacks suppressed
	[ +13.947685] kauditd_printk_skb: 17 callbacks suppressed
	[  +6.103462] kauditd_printk_skb: 22 callbacks suppressed
	[Sep 8 10:34] kauditd_printk_skb: 38 callbacks suppressed
	[  +3.865840] kauditd_printk_skb: 114 callbacks suppressed
	[  +2.592702] kauditd_printk_skb: 64 callbacks suppressed
	[  +2.007244] kauditd_printk_skb: 119 callbacks suppressed
	[  +3.794546] kauditd_printk_skb: 116 callbacks suppressed
	[  +2.064083] kauditd_printk_skb: 96 callbacks suppressed
	[ +10.287204] kauditd_printk_skb: 32 callbacks suppressed
	[  +8.395273] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.233840] kauditd_printk_skb: 10 callbacks suppressed
	[Sep 8 10:35] kauditd_printk_skb: 10 callbacks suppressed
	[  +8.886734] kauditd_printk_skb: 41 callbacks suppressed
	[Sep 8 10:36] kauditd_printk_skb: 127 callbacks suppressed
	
	
	==> etcd [7cc0185a543f5dcd48eb2554716bb89fd6681e881ca9e7e9c2752d7b00131d0f] <==
	{"level":"warn","ts":"2025-09-08T10:32:03.925295Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"114.006964ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-08T10:32:03.925938Z","caller":"traceutil/trace.go:172","msg":"trace[381001552] range","detail":"{range_begin:/registry/daemonsets; range_end:; response_count:0; response_revision:993; }","duration":"115.579029ms","start":"2025-09-08T10:32:03.810348Z","end":"2025-09-08T10:32:03.925927Z","steps":["trace[381001552] 'agreement among raft nodes before linearized reading'  (duration: 113.978779ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T10:32:09.243221Z","caller":"traceutil/trace.go:172","msg":"trace[1430826418] linearizableReadLoop","detail":"{readStateIndex:1038; appliedIndex:1038; }","duration":"213.430336ms","start":"2025-09-08T10:32:09.029771Z","end":"2025-09-08T10:32:09.243202Z","steps":["trace[1430826418] 'read index received'  (duration: 213.423086ms)","trace[1430826418] 'applied index is now lower than readState.Index'  (duration: 6.306µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-08T10:32:09.243552Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"185.631373ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-08T10:32:09.243601Z","caller":"traceutil/trace.go:172","msg":"trace[1390811364] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1006; }","duration":"185.698577ms","start":"2025-09-08T10:32:09.057896Z","end":"2025-09-08T10:32:09.243594Z","steps":["trace[1390811364] 'agreement among raft nodes before linearized reading'  (duration: 185.609829ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-08T10:32:09.243621Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"213.760338ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-08T10:32:09.243646Z","caller":"traceutil/trace.go:172","msg":"trace[145364046] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1006; }","duration":"213.872237ms","start":"2025-09-08T10:32:09.029767Z","end":"2025-09-08T10:32:09.243639Z","steps":["trace[145364046] 'agreement among raft nodes before linearized reading'  (duration: 213.668372ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-08T10:32:09.243726Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"173.747856ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-08T10:32:09.243739Z","caller":"traceutil/trace.go:172","msg":"trace[1628586601] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1006; }","duration":"173.761779ms","start":"2025-09-08T10:32:09.069973Z","end":"2025-09-08T10:32:09.243735Z","steps":["trace[1628586601] 'agreement among raft nodes before linearized reading'  (duration: 173.737966ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T10:32:09.243369Z","caller":"traceutil/trace.go:172","msg":"trace[507188163] transaction","detail":"{read_only:false; response_revision:1006; number_of_response:1; }","duration":"251.763166ms","start":"2025-09-08T10:32:08.991596Z","end":"2025-09-08T10:32:09.243359Z","steps":["trace[507188163] 'process raft request'  (duration: 251.623244ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T10:32:26.394584Z","caller":"traceutil/trace.go:172","msg":"trace[1944605958] linearizableReadLoop","detail":"{readStateIndex:1115; appliedIndex:1115; }","duration":"365.999791ms","start":"2025-09-08T10:32:26.028569Z","end":"2025-09-08T10:32:26.394569Z","steps":["trace[1944605958] 'read index received'  (duration: 365.993473ms)","trace[1944605958] 'applied index is now lower than readState.Index'  (duration: 5.483µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-08T10:32:26.394731Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"366.135163ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-08T10:32:26.394751Z","caller":"traceutil/trace.go:172","msg":"trace[1507728930] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1078; }","duration":"366.179913ms","start":"2025-09-08T10:32:26.028565Z","end":"2025-09-08T10:32:26.394745Z","steps":["trace[1507728930] 'agreement among raft nodes before linearized reading'  (duration: 366.104738ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-08T10:32:26.394769Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-08T10:32:26.028552Z","time spent":"366.213664ms","remote":"127.0.0.1:42234","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2025-09-08T10:32:26.395826Z","caller":"traceutil/trace.go:172","msg":"trace[1145881136] transaction","detail":"{read_only:false; response_revision:1079; number_of_response:1; }","duration":"400.606338ms","start":"2025-09-08T10:32:25.995202Z","end":"2025-09-08T10:32:26.395808Z","steps":["trace[1145881136] 'process raft request'  (duration: 399.801121ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-08T10:32:26.395949Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-08T10:32:25.995189Z","time spent":"400.681061ms","remote":"127.0.0.1:42386","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1037 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:421 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"warn","ts":"2025-09-08T10:32:26.396212Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"326.256456ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-08T10:32:26.396232Z","caller":"traceutil/trace.go:172","msg":"trace[971042475] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1079; }","duration":"326.278436ms","start":"2025-09-08T10:32:26.069948Z","end":"2025-09-08T10:32:26.396227Z","steps":["trace[971042475] 'agreement among raft nodes before linearized reading'  (duration: 326.247215ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-08T10:32:26.396323Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-08T10:32:26.069931Z","time spent":"326.376817ms","remote":"127.0.0.1:42234","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-09-08T10:32:26.397004Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"338.186552ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-08T10:32:26.397326Z","caller":"traceutil/trace.go:172","msg":"trace[1456748419] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1079; }","duration":"338.506338ms","start":"2025-09-08T10:32:26.058806Z","end":"2025-09-08T10:32:26.397312Z","steps":["trace[1456748419] 'agreement among raft nodes before linearized reading'  (duration: 337.037939ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-08T10:32:26.397532Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-08T10:32:26.058792Z","time spent":"338.72753ms","remote":"127.0.0.1:42234","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2025-09-08T10:33:08.876607Z","caller":"traceutil/trace.go:172","msg":"trace[1391212234] transaction","detail":"{read_only:false; response_revision:1210; number_of_response:1; }","duration":"228.040542ms","start":"2025-09-08T10:33:08.648542Z","end":"2025-09-08T10:33:08.876582Z","steps":["trace[1391212234] 'process raft request'  (duration: 227.924686ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T10:33:51.743790Z","caller":"traceutil/trace.go:172","msg":"trace[631755143] transaction","detail":"{read_only:false; response_revision:1346; number_of_response:1; }","duration":"104.184723ms","start":"2025-09-08T10:33:51.639584Z","end":"2025-09-08T10:33:51.743768Z","steps":["trace[631755143] 'process raft request'  (duration: 104.043174ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T10:35:18.930671Z","caller":"traceutil/trace.go:172","msg":"trace[185379498] transaction","detail":"{read_only:false; response_revision:1843; number_of_response:1; }","duration":"251.56012ms","start":"2025-09-08T10:35:18.679085Z","end":"2025-09-08T10:35:18.930645Z","steps":["trace[185379498] 'process raft request'  (duration: 251.484794ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:36:54 up 6 min,  0 users,  load average: 0.57, 1.10, 0.65
	Linux addons-451875 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep  4 13:14:36 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [e33f2acee1b2199b20e01db75304327ee6ca0f8319f234f420afd8673161d922] <==
	I0908 10:33:54.485404       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.127.110"}
	I0908 10:34:13.252965       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 10:34:21.769507       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0908 10:34:22.030506       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.205.108"}
	I0908 10:34:31.202145       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 10:34:34.640480       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	E0908 10:34:43.002720       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0908 10:34:51.267725       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0908 10:35:14.573341       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 10:35:20.467114       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0908 10:35:20.467192       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0908 10:35:20.503985       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0908 10:35:20.504095       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0908 10:35:20.523992       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0908 10:35:20.524044       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0908 10:35:20.537097       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0908 10:35:20.537180       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0908 10:35:20.562311       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0908 10:35:20.562359       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0908 10:35:21.526479       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0908 10:35:21.562658       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0908 10:35:21.584013       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0908 10:35:57.427448       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 10:36:24.840179       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 10:36:52.720507       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.197.216"}
	
	
	==> kube-controller-manager [511cd962fc02cef3d315925f8b294d3bc9dec9c5a9db8908200fa81671ac6f05] <==
	E0908 10:35:25.596673       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 10:35:29.283434       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 10:35:29.284435       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 10:35:30.744387       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 10:35:30.745404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 10:35:31.081339       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 10:35:31.082343       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 10:35:39.863626       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 10:35:39.864830       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 10:35:41.412959       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 10:35:41.413951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 10:35:42.689405       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 10:35:42.690523       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 10:35:59.442956       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 10:35:59.444561       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 10:36:00.918952       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 10:36:00.922195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 10:36:02.342096       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 10:36:02.343124       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 10:36:31.439491       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 10:36:31.440774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 10:36:39.041550       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 10:36:39.042747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 10:36:50.481992       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 10:36:50.483017       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [61a9c70ee3ea7d3cc75bb09fba28c525977bf48b30c3c6cebd9ee232ded66a67] <==
	I0908 10:30:52.301411       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 10:30:52.402214       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 10:30:52.402338       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.92"]
	E0908 10:30:52.402432       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 10:30:52.732852       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0908 10:30:52.733594       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0908 10:30:52.733800       1 server_linux.go:132] "Using iptables Proxier"
	I0908 10:30:52.807336       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 10:30:52.808485       1 server.go:527] "Version info" version="v1.34.0"
	I0908 10:30:52.808520       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 10:30:52.850727       1 config.go:106] "Starting endpoint slice config controller"
	I0908 10:30:52.853649       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 10:30:52.853847       1 config.go:200] "Starting service config controller"
	I0908 10:30:52.853879       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 10:30:52.854029       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 10:30:52.854055       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 10:30:52.874215       1 config.go:309] "Starting node config controller"
	I0908 10:30:52.874271       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 10:30:52.956172       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0908 10:30:52.965136       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0908 10:30:52.965178       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0908 10:30:52.975484       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [3b1f3f5abe23bc75198f4a9702cb47edf50e082001f2ef583c8fd0f10fcc94bb] <==
	E0908 10:30:43.126589       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0908 10:30:43.132782       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0908 10:30:43.132866       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0908 10:30:43.132881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0908 10:30:43.133029       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0908 10:30:43.133137       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0908 10:30:43.133158       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0908 10:30:43.133216       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0908 10:30:43.133311       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0908 10:30:43.133491       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0908 10:30:43.133513       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0908 10:30:43.133571       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0908 10:30:43.950957       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0908 10:30:43.973488       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0908 10:30:43.993395       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0908 10:30:44.110594       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0908 10:30:44.166736       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0908 10:30:44.294770       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0908 10:30:44.308880       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0908 10:30:44.364800       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0908 10:30:44.369330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0908 10:30:44.394528       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0908 10:30:44.427061       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0908 10:30:44.686751       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0908 10:30:47.802123       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 08 10:35:23 addons-451875 kubelet[1498]: I0908 10:35:23.653716    1498 scope.go:117] "RemoveContainer" containerID="09fb7a29e64bedefb6126f8be3f7bb8e4b10d0f85c84580dbf375c69b6fbb80c"
	Sep 08 10:35:23 addons-451875 kubelet[1498]: I0908 10:35:23.654183    1498 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09fb7a29e64bedefb6126f8be3f7bb8e4b10d0f85c84580dbf375c69b6fbb80c"} err="failed to get container status \"09fb7a29e64bedefb6126f8be3f7bb8e4b10d0f85c84580dbf375c69b6fbb80c\": rpc error: code = NotFound desc = could not find container \"09fb7a29e64bedefb6126f8be3f7bb8e4b10d0f85c84580dbf375c69b6fbb80c\": container with ID starting with 09fb7a29e64bedefb6126f8be3f7bb8e4b10d0f85c84580dbf375c69b6fbb80c not found: ID does not exist"
	Sep 08 10:35:23 addons-451875 kubelet[1498]: I0908 10:35:23.654217    1498 scope.go:117] "RemoveContainer" containerID="1f7283b1066671b43e9517317cee62b328da75c1dca5ccc945e1b23645b1b6cb"
	Sep 08 10:35:23 addons-451875 kubelet[1498]: I0908 10:35:23.654830    1498 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f7283b1066671b43e9517317cee62b328da75c1dca5ccc945e1b23645b1b6cb"} err="failed to get container status \"1f7283b1066671b43e9517317cee62b328da75c1dca5ccc945e1b23645b1b6cb\": rpc error: code = NotFound desc = could not find container \"1f7283b1066671b43e9517317cee62b328da75c1dca5ccc945e1b23645b1b6cb\": container with ID starting with 1f7283b1066671b43e9517317cee62b328da75c1dca5ccc945e1b23645b1b6cb not found: ID does not exist"
	Sep 08 10:35:26 addons-451875 kubelet[1498]: E0908 10:35:26.009431    1498 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757327726008923740  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 10:35:26 addons-451875 kubelet[1498]: E0908 10:35:26.009459    1498 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757327726008923740  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 10:35:36 addons-451875 kubelet[1498]: E0908 10:35:36.012944    1498 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757327736012577970  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 10:35:36 addons-451875 kubelet[1498]: E0908 10:35:36.012967    1498 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757327736012577970  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 10:35:46 addons-451875 kubelet[1498]: E0908 10:35:46.015878    1498 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757327746015384440  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 10:35:46 addons-451875 kubelet[1498]: E0908 10:35:46.015925    1498 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757327746015384440  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 10:35:56 addons-451875 kubelet[1498]: E0908 10:35:56.018992    1498 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757327756018491693  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 10:35:56 addons-451875 kubelet[1498]: E0908 10:35:56.019217    1498 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757327756018491693  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 10:36:06 addons-451875 kubelet[1498]: E0908 10:36:06.022628    1498 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757327766022274792  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 10:36:06 addons-451875 kubelet[1498]: E0908 10:36:06.022658    1498 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757327766022274792  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 10:36:12 addons-451875 kubelet[1498]: I0908 10:36:12.559407    1498 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 08 10:36:14 addons-451875 kubelet[1498]: I0908 10:36:14.559154    1498 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-7clhx" secret="" err="secret \"gcp-auth\" not found"
	Sep 08 10:36:16 addons-451875 kubelet[1498]: E0908 10:36:16.025053    1498 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757327776024656048  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 10:36:16 addons-451875 kubelet[1498]: E0908 10:36:16.025100    1498 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757327776024656048  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 10:36:26 addons-451875 kubelet[1498]: E0908 10:36:26.027931    1498 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757327786027406153  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 10:36:26 addons-451875 kubelet[1498]: E0908 10:36:26.027955    1498 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757327786027406153  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 10:36:36 addons-451875 kubelet[1498]: E0908 10:36:36.031072    1498 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757327796030576360  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 10:36:36 addons-451875 kubelet[1498]: E0908 10:36:36.031095    1498 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757327796030576360  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 10:36:46 addons-451875 kubelet[1498]: E0908 10:36:46.034111    1498 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757327806033686640  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 10:36:46 addons-451875 kubelet[1498]: E0908 10:36:46.034134    1498 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757327806033686640  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 10:36:52 addons-451875 kubelet[1498]: I0908 10:36:52.678669    1498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9tcg\" (UniqueName: \"kubernetes.io/projected/9815fcdd-ce33-4d02-a610-3ba927e829fe-kube-api-access-q9tcg\") pod \"hello-world-app-5d498dc89-prnxz\" (UID: \"9815fcdd-ce33-4d02-a610-3ba927e829fe\") " pod="default/hello-world-app-5d498dc89-prnxz"
	
	
	==> storage-provisioner [158ceaff6160935615608748d640a630b840629b33fd990b4782846f93445ef3] <==
	W0908 10:36:29.303215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:36:31.307607       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:36:31.316341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:36:33.320180       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:36:33.325822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:36:35.329918       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:36:35.335466       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:36:37.338998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:36:37.346408       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:36:39.350300       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:36:39.355785       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:36:41.359229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:36:41.365157       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:36:43.368351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:36:43.373579       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:36:45.377492       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:36:45.382651       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:36:47.385996       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:36:47.391158       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:36:49.393972       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:36:49.401227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:36:51.405621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:36:51.413372       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:36:53.417765       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:36:53.425550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-451875 -n addons-451875
helpers_test.go:269: (dbg) Run:  kubectl --context addons-451875 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-prnxz ingress-nginx-admission-create-4x9dk ingress-nginx-admission-patch-d7ntz
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-451875 describe pod hello-world-app-5d498dc89-prnxz ingress-nginx-admission-create-4x9dk ingress-nginx-admission-patch-d7ntz
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-451875 describe pod hello-world-app-5d498dc89-prnxz ingress-nginx-admission-create-4x9dk ingress-nginx-admission-patch-d7ntz: exit status 1 (65.510213ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-prnxz
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-451875/192.168.39.92
	Start Time:       Mon, 08 Sep 2025 10:36:52 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q9tcg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-q9tcg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-prnxz to addons-451875
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-4x9dk" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-d7ntz" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-451875 describe pod hello-world-app-5d498dc89-prnxz ingress-nginx-admission-create-4x9dk ingress-nginx-admission-patch-d7ntz: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-451875 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-451875 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-451875 addons disable ingress --alsologtostderr -v=1: (7.811457229s)
--- FAIL: TestAddons/parallel/Ingress (162.56s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (401.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [e924b08f-e5ee-4dce-a376-2ad37c5552fb] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004403133s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-461050 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-461050 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-461050 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-461050 apply -f testdata/storage-provisioner/pod.yaml
I0908 10:44:04.168081  752332 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [a7281b26-8c92-4b7c-a343-2633599e355f] Pending
helpers_test.go:352: "sp-pod" [a7281b26-8c92-4b7c-a343-2633599e355f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [a7281b26-8c92-4b7c-a343-2633599e355f] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 33.004848153s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-461050 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-461050 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-461050 apply -f testdata/storage-provisioner/pod.yaml
I0908 10:44:37.991943  752332 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [4d3dbe5d-2821-433e-b310-606725bae985] Pending
helpers_test.go:352: "sp-pod" [4d3dbe5d-2821-433e-b310-606725bae985] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:140: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 6m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-461050 -n functional-461050
functional_test_pvc_test.go:140: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-09-08 10:50:38.276443977 +0000 UTC m=+1290.238847161
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-461050 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-461050 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-461050/192.168.39.94
Start Time:       Mon, 08 Sep 2025 10:44:37 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.14
IPs:
  IP:  10.244.0.14
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cjrnx (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-cjrnx:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  6m                   default-scheduler  Successfully assigned default/sp-pod to functional-461050
  Warning  Failed     3m50s                kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:f15190cd0aed34df2541e6a569d349858dd83fe2a519d7c0ec023133b6d3c4f7 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    113s (x4 over 6m)    kubelet            Pulling image "docker.io/nginx"
  Warning  Failed     45s (x3 over 4m58s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     45s (x4 over 4m58s)  kubelet            Error: ErrImagePull
  Normal   BackOff    8s (x8 over 4m57s)   kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     8s (x8 over 4m57s)   kubelet            Error: ImagePullBackOff
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-461050 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-461050 logs sp-pod -n default: exit status 1 (90.37937ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-461050 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 6m0s: context deadline exceeded
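The kubelet events captured above show why sp-pod never started: every pull of docker.io/nginx was rejected with toomanyrequests, Docker Hub's unauthenticated pull rate limit, so the container stayed in ImagePullBackOff until the 6m0s deadline expired. A minimal way to take the registry out of the picture on a local re-run is sketched below; it is hypothetical and not part of the test harness, and it assumes the functional-461050 profile is still up, that the host's Docker daemon can still pull nginx, and that the pod spec is changed to imagePullPolicy: IfNotPresent so the pre-loaded copy is actually used.
	# Pull once on the host, then copy the image into the minikube node's
	# container storage so the kubelet does not need its own pull from docker.io.
	docker pull docker.io/nginx
	out/minikube-linux-amd64 -p functional-461050 image load --daemon docker.io/nginx
	kubectl --context functional-461050 apply -f testdata/storage-provisioner/pod.yaml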
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-461050 -n functional-461050
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-461050 logs -n 25: (1.512967082s)
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-461050 ssh findmnt -T /mount3                                                                                                                     │ functional-461050 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	│ mount          │ -p functional-461050 --kill=true                                                                                                                             │ functional-461050 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │                     │
	│ image          │ functional-461050 image load --daemon kicbase/echo-server:functional-461050 --alsologtostderr                                                                │ functional-461050 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	│ image          │ functional-461050 image ls                                                                                                                                   │ functional-461050 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	│ image          │ functional-461050 image load --daemon kicbase/echo-server:functional-461050 --alsologtostderr                                                                │ functional-461050 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	│ image          │ functional-461050 image ls                                                                                                                                   │ functional-461050 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	│ image          │ functional-461050 image load --daemon kicbase/echo-server:functional-461050 --alsologtostderr                                                                │ functional-461050 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	│ image          │ functional-461050 image ls                                                                                                                                   │ functional-461050 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	│ image          │ functional-461050 image save kicbase/echo-server:functional-461050 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-461050 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	│ image          │ functional-461050 image rm kicbase/echo-server:functional-461050 --alsologtostderr                                                                           │ functional-461050 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	│ image          │ functional-461050 image ls                                                                                                                                   │ functional-461050 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	│ image          │ functional-461050 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-461050 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	│ image          │ functional-461050 image ls                                                                                                                                   │ functional-461050 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	│ image          │ functional-461050 image save --daemon kicbase/echo-server:functional-461050 --alsologtostderr                                                                │ functional-461050 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	│ update-context │ functional-461050 update-context --alsologtostderr -v=2                                                                                                      │ functional-461050 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	│ update-context │ functional-461050 update-context --alsologtostderr -v=2                                                                                                      │ functional-461050 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	│ update-context │ functional-461050 update-context --alsologtostderr -v=2                                                                                                      │ functional-461050 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	│ image          │ functional-461050 image ls --format short --alsologtostderr                                                                                                  │ functional-461050 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	│ image          │ functional-461050 image ls --format yaml --alsologtostderr                                                                                                   │ functional-461050 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	│ ssh            │ functional-461050 ssh pgrep buildkitd                                                                                                                        │ functional-461050 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │                     │
	│ image          │ functional-461050 image build -t localhost/my-image:functional-461050 testdata/build --alsologtostderr                                                       │ functional-461050 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	│ image          │ functional-461050 image ls                                                                                                                                   │ functional-461050 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	│ image          │ functional-461050 image ls --format json --alsologtostderr                                                                                                   │ functional-461050 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	│ image          │ functional-461050 image ls --format table --alsologtostderr                                                                                                  │ functional-461050 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	│ service        │ functional-461050 service hello-node-connect --url                                                                                                           │ functional-461050 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	└────────────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 10:43:58
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 10:43:58.371071  760330 out.go:360] Setting OutFile to fd 1 ...
	I0908 10:43:58.371181  760330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 10:43:58.371193  760330 out.go:374] Setting ErrFile to fd 2...
	I0908 10:43:58.371198  760330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 10:43:58.371423  760330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21503-748170/.minikube/bin
	I0908 10:43:58.372002  760330 out.go:368] Setting JSON to false
	I0908 10:43:58.372982  760330 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":69954,"bootTime":1757258284,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 10:43:58.373039  760330 start.go:140] virtualization: kvm guest
	I0908 10:43:58.374661  760330 out.go:179] * [functional-461050] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 10:43:58.375796  760330 notify.go:220] Checking for updates...
	I0908 10:43:58.375807  760330 out.go:179]   - MINIKUBE_LOCATION=21503
	I0908 10:43:58.377550  760330 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 10:43:58.378731  760330 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21503-748170/kubeconfig
	I0908 10:43:58.379884  760330 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21503-748170/.minikube
	I0908 10:43:58.380865  760330 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 10:43:58.381911  760330 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 10:43:58.383534  760330 config.go:182] Loaded profile config "functional-461050": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 10:43:58.384061  760330 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 10:43:58.384148  760330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 10:43:58.400239  760330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38387
	I0908 10:43:58.400753  760330 main.go:141] libmachine: () Calling .GetVersion
	I0908 10:43:58.401322  760330 main.go:141] libmachine: Using API Version  1
	I0908 10:43:58.401352  760330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 10:43:58.401671  760330 main.go:141] libmachine: () Calling .GetMachineName
	I0908 10:43:58.401886  760330 main.go:141] libmachine: (functional-461050) Calling .DriverName
	I0908 10:43:58.402186  760330 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 10:43:58.402485  760330 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 10:43:58.402532  760330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 10:43:58.419442  760330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45291
	I0908 10:43:58.420004  760330 main.go:141] libmachine: () Calling .GetVersion
	I0908 10:43:58.420572  760330 main.go:141] libmachine: Using API Version  1
	I0908 10:43:58.420599  760330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 10:43:58.421000  760330 main.go:141] libmachine: () Calling .GetMachineName
	I0908 10:43:58.421363  760330 main.go:141] libmachine: (functional-461050) Calling .DriverName
	I0908 10:43:58.459649  760330 out.go:179] * Using the kvm2 driver based on existing profile
	I0908 10:43:58.460909  760330 start.go:304] selected driver: kvm2
	I0908 10:43:58.460932  760330 start.go:918] validating driver "kvm2" against &{Name:functional-461050 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.0 ClusterName:functional-461050 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
ntString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 10:43:58.461070  760330 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 10:43:58.462434  760330 cni.go:84] Creating CNI manager for ""
	I0908 10:43:58.462534  760330 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0908 10:43:58.462596  760330 start.go:348] cluster config:
	{Name:functional-461050 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-461050 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 M
ountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 10:43:58.464318  760330 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Sep 08 10:50:39 functional-461050 crio[5341]: time="2025-09-08 10:50:39.156877652Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757328639156858120,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:252821,},InodesUsed:&UInt64Value{Value:120,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=30824a70-17dc-4899-a98e-07e4a2034084 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 10:50:39 functional-461050 crio[5341]: time="2025-09-08 10:50:39.157376737Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e0331857-3056-41f9-b873-e6a86a79bbd9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 10:50:39 functional-461050 crio[5341]: time="2025-09-08 10:50:39.157444512Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e0331857-3056-41f9-b873-e6a86a79bbd9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 10:50:39 functional-461050 crio[5341]: time="2025-09-08 10:50:39.157804862Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b2465502509fba98872645ee86e7b7d8d24543a54c460e7333981d0c67c75304,PodSandboxId:b92d8b377f41b9009d2317491536d1a9b4e3027b90da6eb12f35705b9f259883,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1757328273229849741,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-fw5qz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ad93be4e-3abd-4fc6-a8d6-2d44ecab1f22,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.
restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4b7a138b55d85b7581479c138021487114aa6126446a029bb63a39f4e6545f2,PodSandboxId:42c37503ba0d0ba35013a5a81b2843a343d04ca0f12f9bcd714b3255a67e856c,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1757328261953824784,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-855c9754f9-j2l4z,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 005f62ec-d907-4634-8440-172c8ccf1a12,},Annotations:map[string]string{io.kub
ernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93315e08f71037755142dea223cde9aca88cd0a000ebe4e1cd3490d73b0e7f93,PodSandboxId:b3c23ffd124ef2034380713f8a51137c95dd59a1f2b9bda40090ad678d952ab2,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1757328253377907787,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-77bf4d6c
4c-5q5bb,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: fee996c1-6cc1-40ab-ac79-b151d9dd80c0,},Annotations:map[string]string{io.kubernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40061b06348f6c9a47b4a10d250373f937c60b05710d0998dcc85af69567010a,PodSandboxId:c52b97fd25574106f5f301cb15bc023284d6be134afebd2677aa35296eb110aa,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1757328247830561868,Labels:
map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 21916ec1-56c0-46e0-bdf9-d3c96579dfa2,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06082000e0b2469e28d026ee49234b5a3145eef847c318ad3250da974c37c61b,PodSandboxId:8916b083f63dc77532e408884e71d2a9c568b89ebc269e88228a660c87b93040,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1757328241607127095,Labels:
map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-75c85bcc94-wq9fk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cd5a8578-1c51-4e6b-8d77-4d87fce03552,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:764dd2644ce126ad5d212d480330d4e7a2ad21ff35edf9f56003c72ed905054d,PodSandboxId:bef0ca7174e0c70088672416dbdc26326bde630bca37dbe6beb0fbd24092d015,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1757328211864018997,Labels:map[string]string{io.kuberne
tes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-rhlvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c353654-7f8e-4829-9036-8590e8c92f15,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48201cc965d2deac577f7cf7fabf4cddf0e1e451ded5b9deb3ab25cd29db69aa,PodSandboxId:7f6b2f8e008218d7a804e4ee5287fd631bc9363066303bb4eafd80b3ba11fd01,Metadata:&ContainerMetadata{Nam
e:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1757328211586335585,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zznjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6051e95a-99dc-43f5-95ea-02ad00ac17b7,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19262bb7897fb688d354ffbdf7d90a28a8c08859fdd19204596e20ffd279cbb5,PodSandboxId:73b9e56c16e3fce1f101a13565883dbe4d17369865692aa1b1f469772dfda68e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},
Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1757328211571950898,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e924b08f-e5ee-4dce-a376-2ad37c5552fb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67f688e15e085c80ac521df58c003ef531911b349f559e8004178c272d0480c8,PodSandboxId:ed214e48ae62231e25573975f081b085b930bfbd839fb7388c01d31c13009055,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f
5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1757328207854335152,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85c4f9e2b7583adb7ba45dc12ba2d33e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cb2e655e9e0ff6f6e348aae4edb2419e68fd89b88835d0334f76e81e46a6c80,PodSandboxId:d35943a7177356f18f1833370206365dc702ab989207d1d310f17c091a853fe5
,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1757328207821862182,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d437c4d856718e00a20cde2c7e3ac68,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccb85c2d4e963cea7ce40a051ab40d48
e2b71e73f1f2a083674e7f49f1a37cc7,PodSandboxId:512dc8cdc8af10e597dfa1be869a45d9334598a6f88f31d77240c82147fbd28f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1757328207849080926,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f50abb8c50375d0fffaceb1a51106782,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:815c7a8e4c5c8720b576edb284e39e0d863d729771f7dd4a1e461b9ae83e65f4,PodSandboxId:9faab0c0964a359a16017488b57072b482b9cd1e747a7df662fbdb80b3cc648c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1757328207806514510,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3b2c4a9245ed10cb68fb667e38cfc5f,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.
container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4acbfa7eaf68db26f7765249fd11606155157780e28c33889fd76647bf042dec,PodSandboxId:4b1bdc3a9221d8a33e2facf047e3c41f8a8a86fbed1869f51b8d11629de409f4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1757328168220357549,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e924b08f-e5ee-4dce-a376-2ad37c5552fb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.res
tartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6414bd6ed5137a3502ef67d652675e110adaba8993aa61c6dc3012c540df8807,PodSandboxId:a03ded27dfaaf8e3b092152d17d30a4587016885c2d13004588dbe7034289667,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1757328167867400453,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3b2c4a9245ed10cb68fb667e38cfc5f,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.
container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70756be06d1d1d3af9e97e1e0ea4b8ed9e10b8272dd41a01cbb6d7bddd660af5,PodSandboxId:37fe97ef6f13770b4f870e330f54c805e3a90ab96ef9c37573db5c2df6ea944e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1757328167878183595,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d437c4d8
56718e00a20cde2c7e3ac68,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:320c9a74122ef2acc756133339358d3c764029e0054f487cd7aef62039646fad,PodSandboxId:9379051f9794bc67bdbda9734843ac8b18173c8c45b16c0a55cad41cddb9ff7c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1757328165103427696,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66b
c5c9577-rhlvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c353654-7f8e-4829-9036-8590e8c92f15,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db4cd9e96f65d88d371f5e2538c657211027847b8520578c6a5c655cd4647fd,PodSandboxId:98bf8a6568fde33956b1706da31638dde3818be781782a082ce3027f9f7e4079,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df8
71eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1757328164423387996,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zznjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6051e95a-99dc-43f5-95ea-02ad00ac17b7,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0358a97a79f3a575cb97190abe9a3af7adda10e4ac547cab73b0b10b48651cdf,PodSandboxId:26fa07b45404b80680271d8cbe209c2cc89fbebc46cfc5b70d458b5d0d374a5c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d556
3dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1757328164281881289,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85c4f9e2b7583adb7ba45dc12ba2d33e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e0331857-3056-41f9-b873-e6a86a79bbd9 name=/runtime.v1.RuntimeService/ListContainers
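	[editor's note] The CRI-O entries above record one polling round of the CRI API: RuntimeService/Version, ImageService/ImageFsInfo, and an unfiltered RuntimeService/ListContainers (the "No filters were applied, returning full container list" line), which is why the complete container list is dumped on every poll. As a point of reference only, and not part of the test run, a minimal Go sketch of the same three calls is shown below; the socket path /var/run/crio/crio.sock is an assumption for a default CRI-O install. The CLI equivalents are crictl version, crictl imagefsinfo, and crictl ps -a.

	// Minimal sketch (assumption: CRI-O serving the CRI v1 API on /var/run/crio/crio.sock).
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Dial the local CRI-O unix socket (path assumed, adjust for your node).
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		rt := runtimeapi.NewRuntimeServiceClient(conn)
		img := runtimeapi.NewImageServiceClient(conn)

		// RuntimeService/Version, as in the "&VersionRequest{}" debug lines.
		ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(ver.RuntimeName, ver.RuntimeVersion)

		// ImageService/ImageFsInfo reports image filesystem usage (mountpoint, bytes).
		fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
		if err != nil {
			log.Fatal(err)
		}
		for _, u := range fs.ImageFilesystems {
			fmt.Println(u.FsId.Mountpoint, u.UsedBytes.Value)
		}

		// RuntimeService/ListContainers with an empty filter returns every container,
		// matching the "No filters were applied" response logged above.
		lst, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{},
		})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range lst.Containers {
			fmt.Println(c.Metadata.Name, c.State)
		}
	}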
	Sep 08 10:50:39 functional-461050 crio[5341]: time="2025-09-08 10:50:39.207740212Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=69d4ca31-0369-4bee-9143-b87e33570b01 name=/runtime.v1.RuntimeService/Version
	Sep 08 10:50:39 functional-461050 crio[5341]: time="2025-09-08 10:50:39.207825614Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=69d4ca31-0369-4bee-9143-b87e33570b01 name=/runtime.v1.RuntimeService/Version
	Sep 08 10:50:39 functional-461050 crio[5341]: time="2025-09-08 10:50:39.209838360Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2c4d7836-1bbc-43e5-abb4-22b0c3fc76eb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 10:50:39 functional-461050 crio[5341]: time="2025-09-08 10:50:39.210947156Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757328639210857808,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:252821,},InodesUsed:&UInt64Value{Value:120,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2c4d7836-1bbc-43e5-abb4-22b0c3fc76eb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 10:50:39 functional-461050 crio[5341]: time="2025-09-08 10:50:39.212296149Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=720e529a-4ea0-4d3e-acdf-0631c9253605 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 10:50:39 functional-461050 crio[5341]: time="2025-09-08 10:50:39.212384112Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=720e529a-4ea0-4d3e-acdf-0631c9253605 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 10:50:39 functional-461050 crio[5341]: time="2025-09-08 10:50:39.212905461Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b2465502509fba98872645ee86e7b7d8d24543a54c460e7333981d0c67c75304,PodSandboxId:b92d8b377f41b9009d2317491536d1a9b4e3027b90da6eb12f35705b9f259883,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1757328273229849741,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-fw5qz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ad93be4e-3abd-4fc6-a8d6-2d44ecab1f22,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.
restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4b7a138b55d85b7581479c138021487114aa6126446a029bb63a39f4e6545f2,PodSandboxId:42c37503ba0d0ba35013a5a81b2843a343d04ca0f12f9bcd714b3255a67e856c,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1757328261953824784,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-855c9754f9-j2l4z,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 005f62ec-d907-4634-8440-172c8ccf1a12,},Annotations:map[string]string{io.kub
ernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93315e08f71037755142dea223cde9aca88cd0a000ebe4e1cd3490d73b0e7f93,PodSandboxId:b3c23ffd124ef2034380713f8a51137c95dd59a1f2b9bda40090ad678d952ab2,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1757328253377907787,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-77bf4d6c
4c-5q5bb,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: fee996c1-6cc1-40ab-ac79-b151d9dd80c0,},Annotations:map[string]string{io.kubernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40061b06348f6c9a47b4a10d250373f937c60b05710d0998dcc85af69567010a,PodSandboxId:c52b97fd25574106f5f301cb15bc023284d6be134afebd2677aa35296eb110aa,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1757328247830561868,Labels:
map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 21916ec1-56c0-46e0-bdf9-d3c96579dfa2,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06082000e0b2469e28d026ee49234b5a3145eef847c318ad3250da974c37c61b,PodSandboxId:8916b083f63dc77532e408884e71d2a9c568b89ebc269e88228a660c87b93040,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1757328241607127095,Labels:
map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-75c85bcc94-wq9fk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cd5a8578-1c51-4e6b-8d77-4d87fce03552,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:764dd2644ce126ad5d212d480330d4e7a2ad21ff35edf9f56003c72ed905054d,PodSandboxId:bef0ca7174e0c70088672416dbdc26326bde630bca37dbe6beb0fbd24092d015,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1757328211864018997,Labels:map[string]string{io.kuberne
tes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-rhlvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c353654-7f8e-4829-9036-8590e8c92f15,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48201cc965d2deac577f7cf7fabf4cddf0e1e451ded5b9deb3ab25cd29db69aa,PodSandboxId:7f6b2f8e008218d7a804e4ee5287fd631bc9363066303bb4eafd80b3ba11fd01,Metadata:&ContainerMetadata{Nam
e:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1757328211586335585,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zznjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6051e95a-99dc-43f5-95ea-02ad00ac17b7,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19262bb7897fb688d354ffbdf7d90a28a8c08859fdd19204596e20ffd279cbb5,PodSandboxId:73b9e56c16e3fce1f101a13565883dbe4d17369865692aa1b1f469772dfda68e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},
Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1757328211571950898,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e924b08f-e5ee-4dce-a376-2ad37c5552fb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67f688e15e085c80ac521df58c003ef531911b349f559e8004178c272d0480c8,PodSandboxId:ed214e48ae62231e25573975f081b085b930bfbd839fb7388c01d31c13009055,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f
5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1757328207854335152,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85c4f9e2b7583adb7ba45dc12ba2d33e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cb2e655e9e0ff6f6e348aae4edb2419e68fd89b88835d0334f76e81e46a6c80,PodSandboxId:d35943a7177356f18f1833370206365dc702ab989207d1d310f17c091a853fe5
,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1757328207821862182,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d437c4d856718e00a20cde2c7e3ac68,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccb85c2d4e963cea7ce40a051ab40d48
e2b71e73f1f2a083674e7f49f1a37cc7,PodSandboxId:512dc8cdc8af10e597dfa1be869a45d9334598a6f88f31d77240c82147fbd28f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1757328207849080926,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f50abb8c50375d0fffaceb1a51106782,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:815c7a8e4c5c8720b576edb284e39e0d863d729771f7dd4a1e461b9ae83e65f4,PodSandboxId:9faab0c0964a359a16017488b57072b482b9cd1e747a7df662fbdb80b3cc648c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1757328207806514510,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3b2c4a9245ed10cb68fb667e38cfc5f,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.
container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4acbfa7eaf68db26f7765249fd11606155157780e28c33889fd76647bf042dec,PodSandboxId:4b1bdc3a9221d8a33e2facf047e3c41f8a8a86fbed1869f51b8d11629de409f4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1757328168220357549,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e924b08f-e5ee-4dce-a376-2ad37c5552fb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.res
tartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6414bd6ed5137a3502ef67d652675e110adaba8993aa61c6dc3012c540df8807,PodSandboxId:a03ded27dfaaf8e3b092152d17d30a4587016885c2d13004588dbe7034289667,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1757328167867400453,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3b2c4a9245ed10cb68fb667e38cfc5f,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.
container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70756be06d1d1d3af9e97e1e0ea4b8ed9e10b8272dd41a01cbb6d7bddd660af5,PodSandboxId:37fe97ef6f13770b4f870e330f54c805e3a90ab96ef9c37573db5c2df6ea944e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1757328167878183595,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d437c4d8
56718e00a20cde2c7e3ac68,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:320c9a74122ef2acc756133339358d3c764029e0054f487cd7aef62039646fad,PodSandboxId:9379051f9794bc67bdbda9734843ac8b18173c8c45b16c0a55cad41cddb9ff7c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1757328165103427696,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66b
c5c9577-rhlvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c353654-7f8e-4829-9036-8590e8c92f15,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db4cd9e96f65d88d371f5e2538c657211027847b8520578c6a5c655cd4647fd,PodSandboxId:98bf8a6568fde33956b1706da31638dde3818be781782a082ce3027f9f7e4079,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df8
71eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1757328164423387996,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zznjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6051e95a-99dc-43f5-95ea-02ad00ac17b7,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0358a97a79f3a575cb97190abe9a3af7adda10e4ac547cab73b0b10b48651cdf,PodSandboxId:26fa07b45404b80680271d8cbe209c2cc89fbebc46cfc5b70d458b5d0d374a5c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d556
3dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1757328164281881289,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85c4f9e2b7583adb7ba45dc12ba2d33e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=720e529a-4ea0-4d3e-acdf-0631c9253605 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 10:50:39 functional-461050 crio[5341]: time="2025-09-08 10:50:39.251504541Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0e408bed-aa9c-466f-a341-00dd396d2450 name=/runtime.v1.RuntimeService/Version
	Sep 08 10:50:39 functional-461050 crio[5341]: time="2025-09-08 10:50:39.251598138Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0e408bed-aa9c-466f-a341-00dd396d2450 name=/runtime.v1.RuntimeService/Version
	Sep 08 10:50:39 functional-461050 crio[5341]: time="2025-09-08 10:50:39.252661440Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8dabad79-2387-4e57-862b-68eee2cadfa0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 10:50:39 functional-461050 crio[5341]: time="2025-09-08 10:50:39.253454950Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757328639253431717,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:252821,},InodesUsed:&UInt64Value{Value:120,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8dabad79-2387-4e57-862b-68eee2cadfa0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 10:50:39 functional-461050 crio[5341]: time="2025-09-08 10:50:39.253976812Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6ae37997-1271-4895-873e-667f59790d0c name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 10:50:39 functional-461050 crio[5341]: time="2025-09-08 10:50:39.254030484Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6ae37997-1271-4895-873e-667f59790d0c name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 10:50:39 functional-461050 crio[5341]: time="2025-09-08 10:50:39.254463636Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b2465502509fba98872645ee86e7b7d8d24543a54c460e7333981d0c67c75304,PodSandboxId:b92d8b377f41b9009d2317491536d1a9b4e3027b90da6eb12f35705b9f259883,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1757328273229849741,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-fw5qz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ad93be4e-3abd-4fc6-a8d6-2d44ecab1f22,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.
restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4b7a138b55d85b7581479c138021487114aa6126446a029bb63a39f4e6545f2,PodSandboxId:42c37503ba0d0ba35013a5a81b2843a343d04ca0f12f9bcd714b3255a67e856c,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1757328261953824784,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-855c9754f9-j2l4z,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 005f62ec-d907-4634-8440-172c8ccf1a12,},Annotations:map[string]string{io.kub
ernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93315e08f71037755142dea223cde9aca88cd0a000ebe4e1cd3490d73b0e7f93,PodSandboxId:b3c23ffd124ef2034380713f8a51137c95dd59a1f2b9bda40090ad678d952ab2,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1757328253377907787,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-77bf4d6c
4c-5q5bb,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: fee996c1-6cc1-40ab-ac79-b151d9dd80c0,},Annotations:map[string]string{io.kubernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40061b06348f6c9a47b4a10d250373f937c60b05710d0998dcc85af69567010a,PodSandboxId:c52b97fd25574106f5f301cb15bc023284d6be134afebd2677aa35296eb110aa,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1757328247830561868,Labels:
map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 21916ec1-56c0-46e0-bdf9-d3c96579dfa2,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06082000e0b2469e28d026ee49234b5a3145eef847c318ad3250da974c37c61b,PodSandboxId:8916b083f63dc77532e408884e71d2a9c568b89ebc269e88228a660c87b93040,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1757328241607127095,Labels:
map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-75c85bcc94-wq9fk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cd5a8578-1c51-4e6b-8d77-4d87fce03552,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:764dd2644ce126ad5d212d480330d4e7a2ad21ff35edf9f56003c72ed905054d,PodSandboxId:bef0ca7174e0c70088672416dbdc26326bde630bca37dbe6beb0fbd24092d015,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1757328211864018997,Labels:map[string]string{io.kuberne
tes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-rhlvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c353654-7f8e-4829-9036-8590e8c92f15,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48201cc965d2deac577f7cf7fabf4cddf0e1e451ded5b9deb3ab25cd29db69aa,PodSandboxId:7f6b2f8e008218d7a804e4ee5287fd631bc9363066303bb4eafd80b3ba11fd01,Metadata:&ContainerMetadata{Nam
e:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1757328211586335585,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zznjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6051e95a-99dc-43f5-95ea-02ad00ac17b7,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19262bb7897fb688d354ffbdf7d90a28a8c08859fdd19204596e20ffd279cbb5,PodSandboxId:73b9e56c16e3fce1f101a13565883dbe4d17369865692aa1b1f469772dfda68e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},
Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1757328211571950898,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e924b08f-e5ee-4dce-a376-2ad37c5552fb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67f688e15e085c80ac521df58c003ef531911b349f559e8004178c272d0480c8,PodSandboxId:ed214e48ae62231e25573975f081b085b930bfbd839fb7388c01d31c13009055,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f
5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1757328207854335152,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85c4f9e2b7583adb7ba45dc12ba2d33e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cb2e655e9e0ff6f6e348aae4edb2419e68fd89b88835d0334f76e81e46a6c80,PodSandboxId:d35943a7177356f18f1833370206365dc702ab989207d1d310f17c091a853fe5
,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1757328207821862182,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d437c4d856718e00a20cde2c7e3ac68,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccb85c2d4e963cea7ce40a051ab40d48
e2b71e73f1f2a083674e7f49f1a37cc7,PodSandboxId:512dc8cdc8af10e597dfa1be869a45d9334598a6f88f31d77240c82147fbd28f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1757328207849080926,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f50abb8c50375d0fffaceb1a51106782,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:815c7a8e4c5c8720b576edb284e39e0d863d729771f7dd4a1e461b9ae83e65f4,PodSandboxId:9faab0c0964a359a16017488b57072b482b9cd1e747a7df662fbdb80b3cc648c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1757328207806514510,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3b2c4a9245ed10cb68fb667e38cfc5f,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.
container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4acbfa7eaf68db26f7765249fd11606155157780e28c33889fd76647bf042dec,PodSandboxId:4b1bdc3a9221d8a33e2facf047e3c41f8a8a86fbed1869f51b8d11629de409f4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1757328168220357549,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e924b08f-e5ee-4dce-a376-2ad37c5552fb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.res
tartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6414bd6ed5137a3502ef67d652675e110adaba8993aa61c6dc3012c540df8807,PodSandboxId:a03ded27dfaaf8e3b092152d17d30a4587016885c2d13004588dbe7034289667,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1757328167867400453,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3b2c4a9245ed10cb68fb667e38cfc5f,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.
container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70756be06d1d1d3af9e97e1e0ea4b8ed9e10b8272dd41a01cbb6d7bddd660af5,PodSandboxId:37fe97ef6f13770b4f870e330f54c805e3a90ab96ef9c37573db5c2df6ea944e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1757328167878183595,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d437c4d8
56718e00a20cde2c7e3ac68,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:320c9a74122ef2acc756133339358d3c764029e0054f487cd7aef62039646fad,PodSandboxId:9379051f9794bc67bdbda9734843ac8b18173c8c45b16c0a55cad41cddb9ff7c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1757328165103427696,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66b
c5c9577-rhlvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c353654-7f8e-4829-9036-8590e8c92f15,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db4cd9e96f65d88d371f5e2538c657211027847b8520578c6a5c655cd4647fd,PodSandboxId:98bf8a6568fde33956b1706da31638dde3818be781782a082ce3027f9f7e4079,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df8
71eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1757328164423387996,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zznjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6051e95a-99dc-43f5-95ea-02ad00ac17b7,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0358a97a79f3a575cb97190abe9a3af7adda10e4ac547cab73b0b10b48651cdf,PodSandboxId:26fa07b45404b80680271d8cbe209c2cc89fbebc46cfc5b70d458b5d0d374a5c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d556
3dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1757328164281881289,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85c4f9e2b7583adb7ba45dc12ba2d33e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6ae37997-1271-4895-873e-667f59790d0c name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 10:50:39 functional-461050 crio[5341]: time="2025-09-08 10:50:39.300093947Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ea8dc4ca-43fc-454a-bc46-c5bd0559ce00 name=/runtime.v1.RuntimeService/Version
	Sep 08 10:50:39 functional-461050 crio[5341]: time="2025-09-08 10:50:39.300428269Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ea8dc4ca-43fc-454a-bc46-c5bd0559ce00 name=/runtime.v1.RuntimeService/Version
	Sep 08 10:50:39 functional-461050 crio[5341]: time="2025-09-08 10:50:39.301923172Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ac74c032-3699-48d3-aa4b-82c0d3f755b3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 10:50:39 functional-461050 crio[5341]: time="2025-09-08 10:50:39.302915192Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757328639302894214,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:252821,},InodesUsed:&UInt64Value{Value:120,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ac74c032-3699-48d3-aa4b-82c0d3f755b3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 10:50:39 functional-461050 crio[5341]: time="2025-09-08 10:50:39.303655104Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c413c896-9a98-4fa8-a34a-dcfa14401f2d name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 10:50:39 functional-461050 crio[5341]: time="2025-09-08 10:50:39.303709144Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c413c896-9a98-4fa8-a34a-dcfa14401f2d name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 10:50:39 functional-461050 crio[5341]: time="2025-09-08 10:50:39.304108778Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b2465502509fba98872645ee86e7b7d8d24543a54c460e7333981d0c67c75304,PodSandboxId:b92d8b377f41b9009d2317491536d1a9b4e3027b90da6eb12f35705b9f259883,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1757328273229849741,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-fw5qz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ad93be4e-3abd-4fc6-a8d6-2d44ecab1f22,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.
restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4b7a138b55d85b7581479c138021487114aa6126446a029bb63a39f4e6545f2,PodSandboxId:42c37503ba0d0ba35013a5a81b2843a343d04ca0f12f9bcd714b3255a67e856c,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1757328261953824784,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-855c9754f9-j2l4z,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 005f62ec-d907-4634-8440-172c8ccf1a12,},Annotations:map[string]string{io.kub
ernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93315e08f71037755142dea223cde9aca88cd0a000ebe4e1cd3490d73b0e7f93,PodSandboxId:b3c23ffd124ef2034380713f8a51137c95dd59a1f2b9bda40090ad678d952ab2,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1757328253377907787,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-77bf4d6c
4c-5q5bb,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: fee996c1-6cc1-40ab-ac79-b151d9dd80c0,},Annotations:map[string]string{io.kubernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40061b06348f6c9a47b4a10d250373f937c60b05710d0998dcc85af69567010a,PodSandboxId:c52b97fd25574106f5f301cb15bc023284d6be134afebd2677aa35296eb110aa,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1757328247830561868,Labels:
map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 21916ec1-56c0-46e0-bdf9-d3c96579dfa2,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06082000e0b2469e28d026ee49234b5a3145eef847c318ad3250da974c37c61b,PodSandboxId:8916b083f63dc77532e408884e71d2a9c568b89ebc269e88228a660c87b93040,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1757328241607127095,Labels:
map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-75c85bcc94-wq9fk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cd5a8578-1c51-4e6b-8d77-4d87fce03552,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:764dd2644ce126ad5d212d480330d4e7a2ad21ff35edf9f56003c72ed905054d,PodSandboxId:bef0ca7174e0c70088672416dbdc26326bde630bca37dbe6beb0fbd24092d015,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1757328211864018997,Labels:map[string]string{io.kuberne
tes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-rhlvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c353654-7f8e-4829-9036-8590e8c92f15,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48201cc965d2deac577f7cf7fabf4cddf0e1e451ded5b9deb3ab25cd29db69aa,PodSandboxId:7f6b2f8e008218d7a804e4ee5287fd631bc9363066303bb4eafd80b3ba11fd01,Metadata:&ContainerMetadata{Nam
e:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1757328211586335585,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zznjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6051e95a-99dc-43f5-95ea-02ad00ac17b7,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19262bb7897fb688d354ffbdf7d90a28a8c08859fdd19204596e20ffd279cbb5,PodSandboxId:73b9e56c16e3fce1f101a13565883dbe4d17369865692aa1b1f469772dfda68e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},
Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1757328211571950898,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e924b08f-e5ee-4dce-a376-2ad37c5552fb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67f688e15e085c80ac521df58c003ef531911b349f559e8004178c272d0480c8,PodSandboxId:ed214e48ae62231e25573975f081b085b930bfbd839fb7388c01d31c13009055,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f
5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1757328207854335152,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85c4f9e2b7583adb7ba45dc12ba2d33e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cb2e655e9e0ff6f6e348aae4edb2419e68fd89b88835d0334f76e81e46a6c80,PodSandboxId:d35943a7177356f18f1833370206365dc702ab989207d1d310f17c091a853fe5
,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1757328207821862182,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d437c4d856718e00a20cde2c7e3ac68,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccb85c2d4e963cea7ce40a051ab40d48
e2b71e73f1f2a083674e7f49f1a37cc7,PodSandboxId:512dc8cdc8af10e597dfa1be869a45d9334598a6f88f31d77240c82147fbd28f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1757328207849080926,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f50abb8c50375d0fffaceb1a51106782,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:815c7a8e4c5c8720b576edb284e39e0d863d729771f7dd4a1e461b9ae83e65f4,PodSandboxId:9faab0c0964a359a16017488b57072b482b9cd1e747a7df662fbdb80b3cc648c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1757328207806514510,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3b2c4a9245ed10cb68fb667e38cfc5f,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.
container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4acbfa7eaf68db26f7765249fd11606155157780e28c33889fd76647bf042dec,PodSandboxId:4b1bdc3a9221d8a33e2facf047e3c41f8a8a86fbed1869f51b8d11629de409f4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1757328168220357549,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e924b08f-e5ee-4dce-a376-2ad37c5552fb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.res
tartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6414bd6ed5137a3502ef67d652675e110adaba8993aa61c6dc3012c540df8807,PodSandboxId:a03ded27dfaaf8e3b092152d17d30a4587016885c2d13004588dbe7034289667,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1757328167867400453,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3b2c4a9245ed10cb68fb667e38cfc5f,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.
container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70756be06d1d1d3af9e97e1e0ea4b8ed9e10b8272dd41a01cbb6d7bddd660af5,PodSandboxId:37fe97ef6f13770b4f870e330f54c805e3a90ab96ef9c37573db5c2df6ea944e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1757328167878183595,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d437c4d8
56718e00a20cde2c7e3ac68,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:320c9a74122ef2acc756133339358d3c764029e0054f487cd7aef62039646fad,PodSandboxId:9379051f9794bc67bdbda9734843ac8b18173c8c45b16c0a55cad41cddb9ff7c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1757328165103427696,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66b
c5c9577-rhlvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c353654-7f8e-4829-9036-8590e8c92f15,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db4cd9e96f65d88d371f5e2538c657211027847b8520578c6a5c655cd4647fd,PodSandboxId:98bf8a6568fde33956b1706da31638dde3818be781782a082ce3027f9f7e4079,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df8
71eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1757328164423387996,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zznjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6051e95a-99dc-43f5-95ea-02ad00ac17b7,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0358a97a79f3a575cb97190abe9a3af7adda10e4ac547cab73b0b10b48651cdf,PodSandboxId:26fa07b45404b80680271d8cbe209c2cc89fbebc46cfc5b70d458b5d0d374a5c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d556
3dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1757328164281881289,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85c4f9e2b7583adb7ba45dc12ba2d33e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c413c896-9a98-4fa8-a34a-dcfa14401f2d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	b2465502509fb       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6            6 minutes ago       Running             echo-server                 0                   b92d8b377f41b       hello-node-connect-7d85dfc575-fw5qz
	a4b7a138b55d8       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         6 minutes ago       Running             kubernetes-dashboard        0                   42c37503ba0d0       kubernetes-dashboard-855c9754f9-j2l4z
	93315e08f7103       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   6 minutes ago       Running             dashboard-metrics-scraper   0                   b3c23ffd124ef       dashboard-metrics-scraper-77bf4d6c4c-5q5bb
	40061b06348f6       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              6 minutes ago       Exited              mount-munger                0                   c52b97fd25574       busybox-mount
	06082000e0b24       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6            6 minutes ago       Running             echo-server                 0                   8916b083f63dc       hello-node-75c85bcc94-wq9fk
	764dd2644ce12       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 7 minutes ago       Running             coredns                     2                   bef0ca7174e0c       coredns-66bc5c9577-rhlvx
	48201cc965d2d       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                 7 minutes ago       Running             kube-proxy                  2                   7f6b2f8e00821       kube-proxy-zznjm
	19262bb7897fb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 7 minutes ago       Running             storage-provisioner         3                   73b9e56c16e3f       storage-provisioner
	67f688e15e085       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 7 minutes ago       Running             etcd                        2                   ed214e48ae622       etcd-functional-461050
	ccb85c2d4e963       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                                 7 minutes ago       Running             kube-apiserver              0                   512dc8cdc8af1       kube-apiserver-functional-461050
	3cb2e655e9e0f       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                 7 minutes ago       Running             kube-scheduler              3                   d35943a717735       kube-scheduler-functional-461050
	815c7a8e4c5c8       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                 7 minutes ago       Running             kube-controller-manager     3                   9faab0c0964a3       kube-controller-manager-functional-461050
	4acbfa7eaf68d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 7 minutes ago       Exited              storage-provisioner         2                   4b1bdc3a9221d       storage-provisioner
	70756be06d1d1       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                 7 minutes ago       Exited              kube-scheduler              2                   37fe97ef6f137       kube-scheduler-functional-461050
	6414bd6ed5137       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                 7 minutes ago       Exited              kube-controller-manager     2                   a03ded27dfaaf       kube-controller-manager-functional-461050
	320c9a74122ef       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 7 minutes ago       Exited              coredns                     1                   9379051f9794b       coredns-66bc5c9577-rhlvx
	3db4cd9e96f65       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                 7 minutes ago       Exited              kube-proxy                  1                   98bf8a6568fde       kube-proxy-zznjm
	0358a97a79f3a       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 7 minutes ago       Exited              etcd                        1                   26fa07b45404b       etcd-functional-461050
	
	
	==> coredns [320c9a74122ef2acc756133339358d3c764029e0054f487cd7aef62039646fad] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54198 - 30337 "HINFO IN 5301851271321554043.7617544016246431185. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012923391s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [764dd2644ce126ad5d212d480330d4e7a2ad21ff35edf9f56003c72ed905054d] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56984 - 36817 "HINFO IN 6878509914046829493.5908532982915739693. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015136575s
	
	
	==> describe nodes <==
	Name:               functional-461050
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-461050
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9b5c9e357ec605e3f7a3fbfd5f3e59fa37db6ba2
	                    minikube.k8s.io/name=functional-461050
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T10_41_54_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 10:41:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-461050
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 10:50:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 10:48:56 +0000   Mon, 08 Sep 2025 10:41:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 10:48:56 +0000   Mon, 08 Sep 2025 10:41:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 10:48:56 +0000   Mon, 08 Sep 2025 10:41:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 10:48:56 +0000   Mon, 08 Sep 2025 10:41:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.94
	  Hostname:    functional-461050
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	System Info:
	  Machine ID:                 5514ec08cd9f46aa8b8ff6a001f5b7d6
	  System UUID:                5514ec08-cd9f-46aa-8b8f-f6a001f5b7d6
	  Boot ID:                    ea734251-d162-44bc-b246-e9ac04071e0d
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-wq9fk                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m43s
	  default                     hello-node-connect-7d85dfc575-fw5qz           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m27s
	  default                     mysql-5bb876957f-gskqp                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    6m12s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 coredns-66bc5c9577-rhlvx                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m41s
	  kube-system                 etcd-functional-461050                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m46s
	  kube-system                 kube-apiserver-functional-461050              250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m8s
	  kube-system                 kube-controller-manager-functional-461050     200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m46s
	  kube-system                 kube-proxy-zznjm                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m41s
	  kube-system                 kube-scheduler-functional-461050              100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m46s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m40s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-5q5bb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m40s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-j2l4z         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m39s                  kube-proxy       
	  Normal  Starting                 7m7s                   kube-proxy       
	  Normal  Starting                 7m51s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  8m53s (x8 over 8m53s)  kubelet          Node functional-461050 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m53s (x8 over 8m53s)  kubelet          Node functional-461050 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m53s (x7 over 8m53s)  kubelet          Node functional-461050 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 8m46s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m46s                  kubelet          Node functional-461050 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m46s                  kubelet          Node functional-461050 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m46s                  kubelet          Node functional-461050 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m45s                  kubelet          Node functional-461050 status is now: NodeReady
	  Normal  RegisteredNode           8m42s                  node-controller  Node functional-461050 event: Registered Node functional-461050 in Controller
	  Normal  Starting                 7m53s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m52s (x8 over 7m52s)  kubelet          Node functional-461050 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m52s (x8 over 7m52s)  kubelet          Node functional-461050 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m52s (x7 over 7m52s)  kubelet          Node functional-461050 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m47s                  node-controller  Node functional-461050 event: Registered Node functional-461050 in Controller
	  Normal  NodeHasNoDiskPressure    7m12s (x8 over 7m12s)  kubelet          Node functional-461050 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  7m12s (x8 over 7m12s)  kubelet          Node functional-461050 status is now: NodeHasSufficientMemory
	  Normal  Starting                 7m12s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     7m12s (x7 over 7m12s)  kubelet          Node functional-461050 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m5s                   node-controller  Node functional-461050 event: Registered Node functional-461050 in Controller
	
	
	==> dmesg <==
	[  +0.000019] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.081351] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.090104] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.025206] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.718328] kauditd_printk_skb: 13 callbacks suppressed
	[Sep 8 10:42] kauditd_printk_skb: 248 callbacks suppressed
	[ +20.339698] kauditd_printk_skb: 38 callbacks suppressed
	[  +0.119515] kauditd_printk_skb: 11 callbacks suppressed
	[  +0.142244] kauditd_printk_skb: 313 callbacks suppressed
	[  +2.513734] kauditd_printk_skb: 67 callbacks suppressed
	[Sep 8 10:43] kauditd_printk_skb: 12 callbacks suppressed
	[  +1.043386] kauditd_printk_skb: 167 callbacks suppressed
	[  +1.825130] kauditd_printk_skb: 192 callbacks suppressed
	[ +14.675467] kauditd_printk_skb: 2 callbacks suppressed
	[  +1.210005] kauditd_printk_skb: 91 callbacks suppressed
	[Sep 8 10:44] kauditd_printk_skb: 173 callbacks suppressed
	[  +3.346734] kauditd_printk_skb: 32 callbacks suppressed
	[  +0.770389] kauditd_printk_skb: 67 callbacks suppressed
	[  +8.544685] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.366766] kauditd_printk_skb: 11 callbacks suppressed
	[  +3.441172] crun[9138]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +1.753922] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.000045] kauditd_printk_skb: 35 callbacks suppressed
	[Sep 8 10:45] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [0358a97a79f3a575cb97190abe9a3af7adda10e4ac547cab73b0b10b48651cdf] <==
	{"level":"warn","ts":"2025-09-08T10:42:46.451371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:42:46.475369Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:42:46.483316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:42:46.494964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:42:46.509173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:42:46.527497Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:42:46.626324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56922","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-08T10:43:16.389880Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-08T10:43:16.389946Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-461050","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.94:2380"],"advertise-client-urls":["https://192.168.39.94:2379"]}
	{"level":"error","ts":"2025-09-08T10:43:16.390009Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-08T10:43:16.459343Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-08T10:43:16.460873Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-08T10:43:16.460913Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"c23cd90330b5fc4f","current-leader-member-id":"c23cd90330b5fc4f"}
	{"level":"info","ts":"2025-09-08T10:43:16.460984Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-08T10:43:16.460992Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-08T10:43:16.460975Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-08T10:43:16.461041Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-08T10:43:16.461050Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-08T10:43:16.461087Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.94:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-08T10:43:16.461094Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.94:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-08T10:43:16.461100Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.94:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-08T10:43:16.463931Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.94:2380"}
	{"level":"error","ts":"2025-09-08T10:43:16.464009Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.94:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-08T10:43:16.464044Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.94:2380"}
	{"level":"info","ts":"2025-09-08T10:43:16.464052Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-461050","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.94:2380"],"advertise-client-urls":["https://192.168.39.94:2379"]}
	
	
	==> etcd [67f688e15e085c80ac521df58c003ef531911b349f559e8004178c272d0480c8] <==
	{"level":"warn","ts":"2025-09-08T10:43:29.873297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:43:29.886557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:43:29.897463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:43:29.909077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:43:29.919185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:43:29.929336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:43:29.945681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:43:29.968087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:43:29.975803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:43:29.985462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:43:30.010417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:43:30.026819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:43:30.035192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:43:30.046648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:43:30.056351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:43:30.145613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35650","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-08T10:44:21.768017Z","caller":"traceutil/trace.go:172","msg":"trace[111164942] linearizableReadLoop","detail":"{readStateIndex:918; appliedIndex:918; }","duration":"273.791376ms","start":"2025-09-08T10:44:21.494199Z","end":"2025-09-08T10:44:21.767990Z","steps":["trace[111164942] 'read index received'  (duration: 273.785768ms)","trace[111164942] 'applied index is now lower than readState.Index'  (duration: 4.773µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-08T10:44:21.768303Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"273.991406ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-08T10:44:21.768364Z","caller":"traceutil/trace.go:172","msg":"trace[206232410] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:836; }","duration":"274.157342ms","start":"2025-09-08T10:44:21.494195Z","end":"2025-09-08T10:44:21.768352Z","steps":["trace[206232410] 'agreement among raft nodes before linearized reading'  (duration: 273.965354ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T10:44:21.769140Z","caller":"traceutil/trace.go:172","msg":"trace[1596888450] transaction","detail":"{read_only:false; response_revision:837; number_of_response:1; }","duration":"414.592267ms","start":"2025-09-08T10:44:21.354539Z","end":"2025-09-08T10:44:21.769132Z","steps":["trace[1596888450] 'process raft request'  (duration: 414.010857ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-08T10:44:21.770862Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-08T10:44:21.354524Z","time spent":"414.790132ms","remote":"127.0.0.1:34848","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:836 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2025-09-08T10:45:06.299524Z","caller":"traceutil/trace.go:172","msg":"trace[2091774038] linearizableReadLoop","detail":"{readStateIndex:1019; appliedIndex:1019; }","duration":"124.59109ms","start":"2025-09-08T10:45:06.174903Z","end":"2025-09-08T10:45:06.299494Z","steps":["trace[2091774038] 'read index received'  (duration: 124.582756ms)","trace[2091774038] 'applied index is now lower than readState.Index'  (duration: 7.533µs)"],"step_count":2}
	{"level":"info","ts":"2025-09-08T10:45:06.299645Z","caller":"traceutil/trace.go:172","msg":"trace[1387463592] transaction","detail":"{read_only:false; response_revision:927; number_of_response:1; }","duration":"221.289153ms","start":"2025-09-08T10:45:06.078345Z","end":"2025-09-08T10:45:06.299635Z","steps":["trace[1387463592] 'process raft request'  (duration: 221.191457ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-08T10:45:06.299696Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.746335ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-08T10:45:06.299713Z","caller":"traceutil/trace.go:172","msg":"trace[1872582710] range","detail":"{range_begin:/registry/serviceaccounts; range_end:; response_count:0; response_revision:927; }","duration":"124.810577ms","start":"2025-09-08T10:45:06.174898Z","end":"2025-09-08T10:45:06.299708Z","steps":["trace[1872582710] 'agreement among raft nodes before linearized reading'  (duration: 124.723219ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:50:39 up 9 min,  0 users,  load average: 0.09, 0.21, 0.16
	Linux functional-461050 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep  4 13:14:36 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [ccb85c2d4e963cea7ce40a051ab40d48e2b71e73f1f2a083674e7f49f1a37cc7] <==
	I0908 10:43:32.399130       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0908 10:43:32.450674       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0908 10:43:32.461200       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0908 10:43:34.394570       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0908 10:43:34.587336       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0908 10:43:34.790929       1 controller.go:667] quota admission added evaluator for: endpoints
	I0908 10:43:51.503060       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.98.234.73"}
	I0908 10:43:56.701863       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.109.62.81"}
	I0908 10:43:59.614620       1 controller.go:667] quota admission added evaluator for: namespaces
	I0908 10:43:59.893592       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.179.41"}
	I0908 10:43:59.912005       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.210.240"}
	I0908 10:44:12.160749       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.110.232.226"}
	I0908 10:44:27.321315       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.100.42.130"}
	I0908 10:44:35.302034       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0908 10:44:37.285917       1 conn.go:339] Error on socket receive: read tcp 192.168.39.94:8441->192.168.39.1:59784: use of closed network connection
	I0908 10:44:39.666666       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 10:45:41.353971       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 10:45:44.507309       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 10:46:57.605662       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 10:47:10.109445       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 10:48:12.265675       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 10:48:27.292010       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 10:49:20.445739       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 10:49:31.355141       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 10:50:37.374111       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [6414bd6ed5137a3502ef67d652675e110adaba8993aa61c6dc3012c540df8807] <==
	I0908 10:42:52.036410       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0908 10:42:52.036469       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-461050"
	I0908 10:42:52.036495       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0908 10:42:52.036529       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0908 10:42:52.040697       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0908 10:42:52.040765       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0908 10:42:52.040785       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0908 10:42:52.040791       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0908 10:42:52.040796       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0908 10:42:52.040904       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0908 10:42:52.043312       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0908 10:42:52.043429       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0908 10:42:52.044616       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0908 10:42:52.045794       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0908 10:42:52.048134       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0908 10:42:52.048202       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0908 10:42:52.054531       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0908 10:42:52.056809       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 10:42:52.056820       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0908 10:42:52.056825       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0908 10:42:52.061616       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0908 10:42:52.063597       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0908 10:42:52.065166       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0908 10:42:52.065291       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0908 10:42:52.083511       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [815c7a8e4c5c8720b576edb284e39e0d863d729771f7dd4a1e461b9ae83e65f4] <==
	I0908 10:43:34.384861       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0908 10:43:34.385035       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0908 10:43:34.386670       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0908 10:43:34.387696       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0908 10:43:34.388968       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0908 10:43:34.389102       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0908 10:43:34.391458       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0908 10:43:34.391498       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0908 10:43:34.393897       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0908 10:43:34.408249       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 10:43:34.413579       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0908 10:43:34.418925       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0908 10:43:34.421548       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0908 10:43:34.426857       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0908 10:43:34.433541       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0908 10:43:34.434776       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0908 10:43:34.434836       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0908 10:43:34.434860       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	E0908 10:43:59.726462       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 10:43:59.726698       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 10:43:59.739571       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 10:43:59.741059       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 10:43:59.750861       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 10:43:59.751145       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 10:43:59.763575       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [3db4cd9e96f65d88d371f5e2538c657211027847b8520578c6a5c655cd4647fd] <==
	I0908 10:42:45.379281       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 10:42:47.582380       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 10:42:47.582421       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.94"]
	E0908 10:42:47.582476       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 10:42:47.647294       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0908 10:42:47.647372       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0908 10:42:47.647395       1 server_linux.go:132] "Using iptables Proxier"
	I0908 10:42:47.661637       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 10:42:47.662524       1 server.go:527] "Version info" version="v1.34.0"
	I0908 10:42:47.662538       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 10:42:47.665344       1 config.go:106] "Starting endpoint slice config controller"
	I0908 10:42:47.672637       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 10:42:47.665696       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 10:42:47.672701       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 10:42:47.670525       1 config.go:200] "Starting service config controller"
	I0908 10:42:47.672730       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 10:42:47.673667       1 config.go:309] "Starting node config controller"
	I0908 10:42:47.675528       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 10:42:47.677288       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 10:42:47.772980       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0908 10:42:47.773044       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0908 10:42:47.772816       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [48201cc965d2deac577f7cf7fabf4cddf0e1e451ded5b9deb3ab25cd29db69aa] <==
	I0908 10:43:31.923523       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 10:43:32.024425       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 10:43:32.024467       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.94"]
	E0908 10:43:32.026200       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 10:43:32.097766       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0908 10:43:32.097861       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0908 10:43:32.097907       1 server_linux.go:132] "Using iptables Proxier"
	I0908 10:43:32.114530       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 10:43:32.115926       1 server.go:527] "Version info" version="v1.34.0"
	I0908 10:43:32.115974       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 10:43:32.126604       1 config.go:200] "Starting service config controller"
	I0908 10:43:32.126616       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 10:43:32.126630       1 config.go:106] "Starting endpoint slice config controller"
	I0908 10:43:32.126634       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 10:43:32.126642       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 10:43:32.126646       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 10:43:32.131390       1 config.go:309] "Starting node config controller"
	I0908 10:43:32.131529       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 10:43:32.131555       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 10:43:32.227296       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0908 10:43:32.227329       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0908 10:43:32.227561       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [3cb2e655e9e0ff6f6e348aae4edb2419e68fd89b88835d0334f76e81e46a6c80] <==
	I0908 10:43:30.263359       1 serving.go:386] Generated self-signed cert in-memory
	I0908 10:43:31.021731       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0908 10:43:31.021824       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 10:43:31.033155       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0908 10:43:31.033316       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0908 10:43:31.033344       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0908 10:43:31.033362       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0908 10:43:31.040168       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 10:43:31.040202       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 10:43:31.040261       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 10:43:31.040268       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 10:43:31.133903       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0908 10:43:31.141310       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 10:43:31.141630       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [70756be06d1d1d3af9e97e1e0ea4b8ed9e10b8272dd41a01cbb6d7bddd660af5] <==
	I0908 10:42:49.281354       1 serving.go:386] Generated self-signed cert in-memory
	I0908 10:42:50.464864       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0908 10:42:50.464904       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 10:42:50.482703       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0908 10:42:50.482857       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0908 10:42:50.482883       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0908 10:42:50.482917       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0908 10:42:50.493459       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 10:42:50.493495       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 10:42:50.493513       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 10:42:50.493517       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 10:42:50.582968       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0908 10:42:50.593922       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 10:42:50.594004       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 10:43:16.383153       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0908 10:43:16.387383       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0908 10:43:16.387619       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0908 10:43:16.387841       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 10:43:16.388046       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 10:43:16.388420       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I0908 10:43:16.390176       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0908 10:43:16.392891       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 08 10:49:53 functional-461050 kubelet[5683]: E0908 10:49:53.708040    5683 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 08 10:49:53 functional-461050 kubelet[5683]: E0908 10:49:53.708108    5683 kuberuntime_manager.go:1449] "Unhandled Error" err="container myfrontend start failed in pod sp-pod_default(4d3dbe5d-2821-433e-b310-606725bae985): ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 08 10:49:53 functional-461050 kubelet[5683]: E0908 10:49:53.708137    5683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="4d3dbe5d-2821-433e-b310-606725bae985"
	Sep 08 10:49:57 functional-461050 kubelet[5683]: E0908 10:49:57.383096    5683 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757328597382398114  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:252821}  inodes_used:{value:120}}"
	Sep 08 10:49:57 functional-461050 kubelet[5683]: E0908 10:49:57.383120    5683 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757328597382398114  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:252821}  inodes_used:{value:120}}"
	Sep 08 10:50:00 functional-461050 kubelet[5683]: E0908 10:50:00.143195    5683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-gskqp" podUID="3b715ee3-43fc-4c1f-a057-a7722ba4ec27"
	Sep 08 10:50:04 functional-461050 kubelet[5683]: E0908 10:50:04.142926    5683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="4d3dbe5d-2821-433e-b310-606725bae985"
	Sep 08 10:50:07 functional-461050 kubelet[5683]: E0908 10:50:07.385585    5683 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757328607385169267  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:252821}  inodes_used:{value:120}}"
	Sep 08 10:50:07 functional-461050 kubelet[5683]: E0908 10:50:07.385627    5683 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757328607385169267  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:252821}  inodes_used:{value:120}}"
	Sep 08 10:50:15 functional-461050 kubelet[5683]: E0908 10:50:15.144317    5683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-gskqp" podUID="3b715ee3-43fc-4c1f-a057-a7722ba4ec27"
	Sep 08 10:50:17 functional-461050 kubelet[5683]: E0908 10:50:17.387873    5683 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757328617387417942  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:252821}  inodes_used:{value:120}}"
	Sep 08 10:50:17 functional-461050 kubelet[5683]: E0908 10:50:17.387902    5683 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757328617387417942  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:252821}  inodes_used:{value:120}}"
	Sep 08 10:50:19 functional-461050 kubelet[5683]: E0908 10:50:19.142871    5683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="4d3dbe5d-2821-433e-b310-606725bae985"
	Sep 08 10:50:27 functional-461050 kubelet[5683]: E0908 10:50:27.211113    5683 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod1d437c4d856718e00a20cde2c7e3ac68/crio-37fe97ef6f13770b4f870e330f54c805e3a90ab96ef9c37573db5c2df6ea944e: Error finding container 37fe97ef6f13770b4f870e330f54c805e3a90ab96ef9c37573db5c2df6ea944e: Status 404 returned error can't find the container with id 37fe97ef6f13770b4f870e330f54c805e3a90ab96ef9c37573db5c2df6ea944e
	Sep 08 10:50:27 functional-461050 kubelet[5683]: E0908 10:50:27.211971    5683 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod85c4f9e2b7583adb7ba45dc12ba2d33e/crio-26fa07b45404b80680271d8cbe209c2cc89fbebc46cfc5b70d458b5d0d374a5c: Error finding container 26fa07b45404b80680271d8cbe209c2cc89fbebc46cfc5b70d458b5d0d374a5c: Status 404 returned error can't find the container with id 26fa07b45404b80680271d8cbe209c2cc89fbebc46cfc5b70d458b5d0d374a5c
	Sep 08 10:50:27 functional-461050 kubelet[5683]: E0908 10:50:27.212396    5683 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod6051e95a-99dc-43f5-95ea-02ad00ac17b7/crio-98bf8a6568fde33956b1706da31638dde3818be781782a082ce3027f9f7e4079: Error finding container 98bf8a6568fde33956b1706da31638dde3818be781782a082ce3027f9f7e4079: Status 404 returned error can't find the container with id 98bf8a6568fde33956b1706da31638dde3818be781782a082ce3027f9f7e4079
	Sep 08 10:50:27 functional-461050 kubelet[5683]: E0908 10:50:27.212728    5683 manager.go:1116] Failed to create existing container: /kubepods/burstable/podb3b2c4a9245ed10cb68fb667e38cfc5f/crio-a03ded27dfaaf8e3b092152d17d30a4587016885c2d13004588dbe7034289667: Error finding container a03ded27dfaaf8e3b092152d17d30a4587016885c2d13004588dbe7034289667: Status 404 returned error can't find the container with id a03ded27dfaaf8e3b092152d17d30a4587016885c2d13004588dbe7034289667
	Sep 08 10:50:27 functional-461050 kubelet[5683]: E0908 10:50:27.213107    5683 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod8c353654-7f8e-4829-9036-8590e8c92f15/crio-9379051f9794bc67bdbda9734843ac8b18173c8c45b16c0a55cad41cddb9ff7c: Error finding container 9379051f9794bc67bdbda9734843ac8b18173c8c45b16c0a55cad41cddb9ff7c: Status 404 returned error can't find the container with id 9379051f9794bc67bdbda9734843ac8b18173c8c45b16c0a55cad41cddb9ff7c
	Sep 08 10:50:27 functional-461050 kubelet[5683]: E0908 10:50:27.213460    5683 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pode924b08f-e5ee-4dce-a376-2ad37c5552fb/crio-4b1bdc3a9221d8a33e2facf047e3c41f8a8a86fbed1869f51b8d11629de409f4: Error finding container 4b1bdc3a9221d8a33e2facf047e3c41f8a8a86fbed1869f51b8d11629de409f4: Status 404 returned error can't find the container with id 4b1bdc3a9221d8a33e2facf047e3c41f8a8a86fbed1869f51b8d11629de409f4
	Sep 08 10:50:27 functional-461050 kubelet[5683]: E0908 10:50:27.389828    5683 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757328627389444950  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:252821}  inodes_used:{value:120}}"
	Sep 08 10:50:27 functional-461050 kubelet[5683]: E0908 10:50:27.389849    5683 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757328627389444950  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:252821}  inodes_used:{value:120}}"
	Sep 08 10:50:29 functional-461050 kubelet[5683]: E0908 10:50:29.144957    5683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-gskqp" podUID="3b715ee3-43fc-4c1f-a057-a7722ba4ec27"
	Sep 08 10:50:30 functional-461050 kubelet[5683]: E0908 10:50:30.142928    5683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="4d3dbe5d-2821-433e-b310-606725bae985"
	Sep 08 10:50:37 functional-461050 kubelet[5683]: E0908 10:50:37.392427    5683 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757328637391953307  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:252821}  inodes_used:{value:120}}"
	Sep 08 10:50:37 functional-461050 kubelet[5683]: E0908 10:50:37.392467    5683 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757328637391953307  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:252821}  inodes_used:{value:120}}"
	
	
	==> kubernetes-dashboard [a4b7a138b55d85b7581479c138021487114aa6126446a029bb63a39f4e6545f2] <==
	2025/09/08 10:44:22 Using namespace: kubernetes-dashboard
	2025/09/08 10:44:22 Using in-cluster config to connect to apiserver
	2025/09/08 10:44:22 Using secret token for csrf signing
	2025/09/08 10:44:22 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/09/08 10:44:22 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/09/08 10:44:22 Successful initial request to the apiserver, version: v1.34.0
	2025/09/08 10:44:22 Generating JWE encryption key
	2025/09/08 10:44:22 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/09/08 10:44:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/09/08 10:44:22 Initializing JWE encryption key from synchronized object
	2025/09/08 10:44:22 Creating in-cluster Sidecar client
	2025/09/08 10:44:22 Successful request to sidecar
	2025/09/08 10:44:22 Serving insecurely on HTTP port: 9090
	2025/09/08 10:44:22 Starting overwatch
	
	
	==> storage-provisioner [19262bb7897fb688d354ffbdf7d90a28a8c08859fdd19204596e20ffd279cbb5] <==
	W0908 10:50:15.875480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:50:17.878312       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:50:17.883525       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:50:19.886755       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:50:19.891044       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:50:21.894811       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:50:21.903491       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:50:23.907038       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:50:23.911609       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:50:25.915337       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:50:25.923081       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:50:27.926515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:50:27.932328       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:50:29.935050       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:50:29.939574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:50:31.942680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:50:31.947622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:50:33.951303       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:50:33.955974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:50:35.959315       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:50:35.967686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:50:37.971073       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:50:37.977652       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:50:39.981385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:50:39.991674       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [4acbfa7eaf68db26f7765249fd11606155157780e28c33889fd76647bf042dec] <==
	I0908 10:42:48.380312       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0908 10:42:48.380361       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0908 10:42:48.391572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:42:51.846747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:42:56.109649       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:42:59.708473       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:43:02.763191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:43:05.785336       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:43:05.790953       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0908 10:43:05.791069       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0908 10:43:05.792034       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"91dc6c72-6e62-464a-b160-6bf12ed3eb48", APIVersion:"v1", ResourceVersion:"539", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-461050_b72e6edd-2e6b-4aaa-85e6-7d7bacf275aa became leader
	I0908 10:43:05.792543       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-461050_b72e6edd-2e6b-4aaa-85e6-7d7bacf275aa!
	W0908 10:43:05.793879       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:43:05.802445       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0908 10:43:05.893185       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-461050_b72e6edd-2e6b-4aaa-85e6-7d7bacf275aa!
	W0908 10:43:07.805670       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:43:07.811422       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:43:09.814845       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:43:09.818865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:43:11.822319       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:43:11.831207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:43:13.834021       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:43:13.840989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:43:15.844885       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:43:15.850486       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-461050 -n functional-461050
helpers_test.go:269: (dbg) Run:  kubectl --context functional-461050 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount mysql-5bb876957f-gskqp sp-pod
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-461050 describe pod busybox-mount mysql-5bb876957f-gskqp sp-pod
helpers_test.go:290: (dbg) kubectl --context functional-461050 describe pod busybox-mount mysql-5bb876957f-gskqp sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-461050/192.168.39.94
	Start Time:       Mon, 08 Sep 2025 10:43:58 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://40061b06348f6c9a47b4a10d250373f937c60b05710d0998dcc85af69567010a
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 08 Sep 2025 10:44:07 +0000
	      Finished:     Mon, 08 Sep 2025 10:44:07 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zg898 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-zg898:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  6m41s  default-scheduler  Successfully assigned default/busybox-mount to functional-461050
	  Normal  Pulling    6m41s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     6m33s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 6.222s (8.398s including waiting). Image size: 4631262 bytes.
	  Normal  Created    6m33s  kubelet            Created container: mount-munger
	  Normal  Started    6m33s  kubelet            Started container mount-munger
	
	
	Name:             mysql-5bb876957f-gskqp
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-461050/192.168.39.94
	Start Time:       Mon, 08 Sep 2025 10:44:27 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.13
	IPs:
	  IP:           10.244.0.13
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-glh28 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-glh28:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m13s                  default-scheduler  Successfully assigned default/mysql-5bb876957f-gskqp to functional-461050
	  Warning  Failed     3m19s (x2 over 4m26s)  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m26s (x4 over 6m9s)   kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     80s (x2 over 5m33s)    kubelet            Failed to pull image "docker.io/mysql:5.7": fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     80s (x4 over 5m33s)    kubelet            Error: ErrImagePull
	  Normal   BackOff    11s (x10 over 5m33s)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     11s (x10 over 5m33s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-461050/192.168.39.94
	Start Time:       Mon, 08 Sep 2025 10:44:37 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.14
	IPs:
	  IP:  10.244.0.14
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cjrnx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-cjrnx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m2s                 default-scheduler  Successfully assigned default/sp-pod to functional-461050
	  Warning  Failed     3m52s                kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:f15190cd0aed34df2541e6a569d349858dd83fe2a519d7c0ec023133b6d3c4f7 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    115s (x4 over 6m2s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     47s (x3 over 5m)     kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     47s (x4 over 5m)     kubelet            Error: ErrImagePull
	  Normal   BackOff    10s (x8 over 4m59s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     10s (x8 over 4m59s)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
E0908 10:53:30.883094  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (401.83s)

                                                
                                    
TestFunctional/parallel/MySQL (602.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-461050 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-gskqp" [3b715ee3-43fc-4c1f-a057-a7722ba4ec27] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
2025/09/08 10:44:29 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1804: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1804: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-461050 -n functional-461050
functional_test.go:1804: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2025-09-08 10:54:27.671184154 +0000 UTC m=+1519.633587328
functional_test.go:1804: (dbg) Run:  kubectl --context functional-461050 describe po mysql-5bb876957f-gskqp -n default
functional_test.go:1804: (dbg) kubectl --context functional-461050 describe po mysql-5bb876957f-gskqp -n default:
Name:             mysql-5bb876957f-gskqp
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-461050/192.168.39.94
Start Time:       Mon, 08 Sep 2025 10:44:27 +0000
Labels:           app=mysql
                  pod-template-hash=5bb876957f
Annotations:      <none>
Status:           Pending
IP:               10.244.0.13
IPs:
  IP:           10.244.0.13
Controlled By:  ReplicaSet/mysql-5bb876957f
Containers:
  mysql:
    Container ID:   
    Image:          docker.io/mysql:5.7
    Image ID:       
    Port:           3306/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     700m
      memory:  700Mi
    Requests:
      cpu:     600m
      memory:  512Mi
    Environment:
      MYSQL_ROOT_PASSWORD:  password
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-glh28 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-glh28:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-5bb876957f-gskqp to functional-461050
  Warning  Failed     7m6s (x2 over 8m13s)   kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    3m43s (x5 over 9m56s)  kubelet            Pulling image "docker.io/mysql:5.7"
  Warning  Failed     2m37s (x3 over 9m20s)  kubelet            Failed to pull image "docker.io/mysql:5.7": fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     2m37s (x5 over 9m20s)  kubelet            Error: ErrImagePull
  Warning  Failed     82s (x16 over 9m20s)   kubelet            Error: ImagePullBackOff
  Normal   BackOff    15s (x21 over 9m20s)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
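Note on the failure above: every attempt to pull docker.io/mysql:5.7 was rejected with toomanyrequests, Docker Hub's unauthenticated pull rate limit, so the container never left ImagePullBackOff and the 10m0s wait timed out. A minimal, hedged mitigation sketch follows; the secret name regcred and the credential placeholders are hypothetical and were not part of this run:

	# hypothetical: authenticate pulls so docker.io/mysql:5.7 is not rate limited
	kubectl --context functional-461050 create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> --docker-password=<access-token>
	kubectl --context functional-461050 patch serviceaccount default \
	  -p '{"imagePullSecrets":[{"name":"regcred"}]}'
	# or pre-load the image into the node so the kubelet never pulls from Docker Hub:
	# out/minikube-linux-amd64 -p functional-461050 image load docker.io/mysql:5.7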
functional_test.go:1804: (dbg) Run:  kubectl --context functional-461050 logs mysql-5bb876957f-gskqp -n default
functional_test.go:1804: (dbg) Non-zero exit: kubectl --context functional-461050 logs mysql-5bb876957f-gskqp -n default: exit status 1 (70.221578ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-5bb876957f-gskqp" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1804: kubectl --context functional-461050 logs mysql-5bb876957f-gskqp -n default: exit status 1
functional_test.go:1806: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/MySQL]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-461050 -n functional-461050
helpers_test.go:252: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-461050 logs -n 25: (1.507016603s)
helpers_test.go:260: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-461050 ssh findmnt -T /mount3                                                                                                                     │ functional-461050 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	│ mount          │ -p functional-461050 --kill=true                                                                                                                             │ functional-461050 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │                     │
	│ image          │ functional-461050 image load --daemon kicbase/echo-server:functional-461050 --alsologtostderr                                                                │ functional-461050 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	│ image          │ functional-461050 image ls                                                                                                                                   │ functional-461050 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	│ image          │ functional-461050 image load --daemon kicbase/echo-server:functional-461050 --alsologtostderr                                                                │ functional-461050 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	│ image          │ functional-461050 image ls                                                                                                                                   │ functional-461050 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	│ image          │ functional-461050 image load --daemon kicbase/echo-server:functional-461050 --alsologtostderr                                                                │ functional-461050 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	│ image          │ functional-461050 image ls                                                                                                                                   │ functional-461050 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	│ image          │ functional-461050 image save kicbase/echo-server:functional-461050 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-461050 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	│ image          │ functional-461050 image rm kicbase/echo-server:functional-461050 --alsologtostderr                                                                           │ functional-461050 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	│ image          │ functional-461050 image ls                                                                                                                                   │ functional-461050 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	│ image          │ functional-461050 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-461050 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	│ image          │ functional-461050 image ls                                                                                                                                   │ functional-461050 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	│ image          │ functional-461050 image save --daemon kicbase/echo-server:functional-461050 --alsologtostderr                                                                │ functional-461050 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	│ update-context │ functional-461050 update-context --alsologtostderr -v=2                                                                                                      │ functional-461050 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	│ update-context │ functional-461050 update-context --alsologtostderr -v=2                                                                                                      │ functional-461050 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	│ update-context │ functional-461050 update-context --alsologtostderr -v=2                                                                                                      │ functional-461050 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	│ image          │ functional-461050 image ls --format short --alsologtostderr                                                                                                  │ functional-461050 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	│ image          │ functional-461050 image ls --format yaml --alsologtostderr                                                                                                   │ functional-461050 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	│ ssh            │ functional-461050 ssh pgrep buildkitd                                                                                                                        │ functional-461050 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │                     │
	│ image          │ functional-461050 image build -t localhost/my-image:functional-461050 testdata/build --alsologtostderr                                                       │ functional-461050 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	│ image          │ functional-461050 image ls                                                                                                                                   │ functional-461050 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	│ image          │ functional-461050 image ls --format json --alsologtostderr                                                                                                   │ functional-461050 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	│ image          │ functional-461050 image ls --format table --alsologtostderr                                                                                                  │ functional-461050 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	│ service        │ functional-461050 service hello-node-connect --url                                                                                                           │ functional-461050 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	└────────────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 10:43:58
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 10:43:58.371071  760330 out.go:360] Setting OutFile to fd 1 ...
	I0908 10:43:58.371181  760330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 10:43:58.371193  760330 out.go:374] Setting ErrFile to fd 2...
	I0908 10:43:58.371198  760330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 10:43:58.371423  760330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21503-748170/.minikube/bin
	I0908 10:43:58.372002  760330 out.go:368] Setting JSON to false
	I0908 10:43:58.372982  760330 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":69954,"bootTime":1757258284,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 10:43:58.373039  760330 start.go:140] virtualization: kvm guest
	I0908 10:43:58.374661  760330 out.go:179] * [functional-461050] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 10:43:58.375796  760330 notify.go:220] Checking for updates...
	I0908 10:43:58.375807  760330 out.go:179]   - MINIKUBE_LOCATION=21503
	I0908 10:43:58.377550  760330 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 10:43:58.378731  760330 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21503-748170/kubeconfig
	I0908 10:43:58.379884  760330 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21503-748170/.minikube
	I0908 10:43:58.380865  760330 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 10:43:58.381911  760330 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 10:43:58.383534  760330 config.go:182] Loaded profile config "functional-461050": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 10:43:58.384061  760330 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 10:43:58.384148  760330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 10:43:58.400239  760330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38387
	I0908 10:43:58.400753  760330 main.go:141] libmachine: () Calling .GetVersion
	I0908 10:43:58.401322  760330 main.go:141] libmachine: Using API Version  1
	I0908 10:43:58.401352  760330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 10:43:58.401671  760330 main.go:141] libmachine: () Calling .GetMachineName
	I0908 10:43:58.401886  760330 main.go:141] libmachine: (functional-461050) Calling .DriverName
	I0908 10:43:58.402186  760330 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 10:43:58.402485  760330 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 10:43:58.402532  760330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 10:43:58.419442  760330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45291
	I0908 10:43:58.420004  760330 main.go:141] libmachine: () Calling .GetVersion
	I0908 10:43:58.420572  760330 main.go:141] libmachine: Using API Version  1
	I0908 10:43:58.420599  760330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 10:43:58.421000  760330 main.go:141] libmachine: () Calling .GetMachineName
	I0908 10:43:58.421363  760330 main.go:141] libmachine: (functional-461050) Calling .DriverName
	I0908 10:43:58.459649  760330 out.go:179] * Using the kvm2 driver based on existing profile
	I0908 10:43:58.460909  760330 start.go:304] selected driver: kvm2
	I0908 10:43:58.460932  760330 start.go:918] validating driver "kvm2" against &{Name:functional-461050 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.0 ClusterName:functional-461050 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
ntString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 10:43:58.461070  760330 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 10:43:58.462434  760330 cni.go:84] Creating CNI manager for ""
	I0908 10:43:58.462534  760330 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0908 10:43:58.462596  760330 start.go:348] cluster config:
	{Name:functional-461050 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-461050 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 M
ountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 10:43:58.464318  760330 out.go:179] * dry-run validation complete!
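	The "Last Start" log above is the dry-run re-validation of the existing functional-461050 profile: kvm2 driver, crio runtime, Kubernetes v1.34.0, 2 CPUs / 4096 MB, API server on 192.168.39.94:8441 with the NamespaceAutoProvision admission plugin enabled. As a hedged sketch only (the exact flags used by the test harness are not shown in this log), a start command that yields a comparable profile would look like:

	  out/minikube-linux-amd64 start -p functional-461050 \
	    --driver=kvm2 --container-runtime=crio \
	    --cpus=2 --memory=4096 --apiserver-port=8441 \
	    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision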
	
	
	==> CRI-O <==
	Sep 08 10:54:28 functional-461050 crio[5341]: time="2025-09-08 10:54:28.580465630Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:754cce2f8bacaa3f4a5bbc6b9dd4ed04cf14427e9e0697607a8c9320c27f4027,Metadata:&PodSandboxMetadata{Name:sp-pod,Uid:4d3dbe5d-2821-433e-b310-606725bae985,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1757328278311325121,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4d3dbe5d-2821-433e-b310-606725bae985,test: storage-provisioner,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"test\":\"storage-provisioner\"},\"name\":\"sp-pod\",\"namespace\":\"default\"},\"spec\":{\"containers\":[{\"image\":\"docker.io/nginx\",\"name\":\"myfrontend\",\"volumeMounts\":[{\"mountPath\":\"/tmp/mount\",\"name\":\"mypd\"}]}],\"volu
mes\":[{\"name\":\"mypd\",\"persistentVolumeClaim\":{\"claimName\":\"myclaim\"}}]}}\n,kubernetes.io/config.seen: 2025-09-08T10:44:37.986542990Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:136cf268ccf7f4938da9027460bcb746b55e4305601ee1789ee1a834a598ed8c,Metadata:&PodSandboxMetadata{Name:mysql-5bb876957f-gskqp,Uid:3b715ee3-43fc-4c1f-a057-a7722ba4ec27,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1757328267762029511,Labels:map[string]string{app: mysql,io.kubernetes.container.name: POD,io.kubernetes.pod.name: mysql-5bb876957f-gskqp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3b715ee3-43fc-4c1f-a057-a7722ba4ec27,pod-template-hash: 5bb876957f,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-08T10:44:27.438761456Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b92d8b377f41b9009d2317491536d1a9b4e3027b90da6eb12f35705b9f259883,Metadata:&PodSandboxMetadata{Name:hello-node-connect-7d85dfc575-fw5qz,Uid:ad93be4e-3abd-4fc6-a8d6-2d44
ecab1f22,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1757328252408869363,Labels:map[string]string{app: hello-node-connect,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-fw5qz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ad93be4e-3abd-4fc6-a8d6-2d44ecab1f22,pod-template-hash: 7d85dfc575,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-08T10:44:12.083734255Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:42c37503ba0d0ba35013a5a81b2843a343d04ca0f12f9bcd714b3255a67e856c,Metadata:&PodSandboxMetadata{Name:kubernetes-dashboard-855c9754f9-j2l4z,Uid:005f62ec-d907-4634-8440-172c8ccf1a12,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1757328240390758407,Labels:map[string]string{gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kubernetes-dashboard-855c9754f9-j2l4z,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 005f62ec-d907-4
634-8440-172c8ccf1a12,k8s-app: kubernetes-dashboard,pod-template-hash: 855c9754f9,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-08T10:43:59.776374527Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b3c23ffd124ef2034380713f8a51137c95dd59a1f2b9bda40090ad678d952ab2,Metadata:&PodSandboxMetadata{Name:dashboard-metrics-scraper-77bf4d6c4c-5q5bb,Uid:fee996c1-6cc1-40ab-ac79-b151d9dd80c0,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1757328240140160324,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: dashboard-metrics-scraper-77bf4d6c4c-5q5bb,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: fee996c1-6cc1-40ab-ac79-b151d9dd80c0,k8s-app: dashboard-metrics-scraper,pod-template-hash: 77bf4d6c4c,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-08T10:43:59.818301998Z,kubernetes.io/config.source: api,seccomp.security.alpha.kubernetes.io/pod: runtime/default,},RuntimeHandler:,},&PodSa
ndbox{Id:c52b97fd25574106f5f301cb15bc023284d6be134afebd2677aa35296eb110aa,Metadata:&PodSandboxMetadata{Name:busybox-mount,Uid:21916ec1-56c0-46e0-bdf9-d3c96579dfa2,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1757328239114362807,Labels:map[string]string{integration-test: busybox-mount,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 21916ec1-56c0-46e0-bdf9-d3c96579dfa2,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-08T10:43:58.796546358Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8916b083f63dc77532e408884e71d2a9c568b89ebc269e88228a660c87b93040,Metadata:&PodSandboxMetadata{Name:hello-node-75c85bcc94-wq9fk,Uid:cd5a8578-1c51-4e6b-8d77-4d87fce03552,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1757328236965766132,Labels:map[string]string{app: hello-node,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-node-75c85bcc94-wq9fk,io.kubernetes.pod.n
amespace: default,io.kubernetes.pod.uid: cd5a8578-1c51-4e6b-8d77-4d87fce03552,pod-template-hash: 75c85bcc94,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-08T10:43:56.641686510Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bef0ca7174e0c70088672416dbdc26326bde630bca37dbe6beb0fbd24092d015,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-rhlvx,Uid:8c353654-7f8e-4829-9036-8590e8c92f15,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1757328211495182971,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-rhlvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c353654-7f8e-4829-9036-8590e8c92f15,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-08T10:43:31.064355205Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7f6b2f8e008218d7a804e4ee5287fd631bc9363066303bb4eafd80b3ba11fd01,Metadata:&PodSandboxMetadata{
Name:kube-proxy-zznjm,Uid:6051e95a-99dc-43f5-95ea-02ad00ac17b7,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1757328211403329509,Labels:map[string]string{controller-revision-hash: 6f475c7966,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-zznjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6051e95a-99dc-43f5-95ea-02ad00ac17b7,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-08T10:43:31.064364746Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:73b9e56c16e3fce1f101a13565883dbe4d17369865692aa1b1f469772dfda68e,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:e924b08f-e5ee-4dce-a376-2ad37c5552fb,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1757328211391010516,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e924b08f-e5ee-4dce-a376-2ad37c5552fb,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-09-08T10:43:31.064367196Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ed214e48ae62231e25573975f081b085b930bfbd839fb7388c01d31c130
09055,Metadata:&PodSandboxMetadata{Name:etcd-functional-461050,Uid:85c4f9e2b7583adb7ba45dc12ba2d33e,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1757328207598098620,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85c4f9e2b7583adb7ba45dc12ba2d33e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.94:2379,kubernetes.io/config.hash: 85c4f9e2b7583adb7ba45dc12ba2d33e,kubernetes.io/config.seen: 2025-09-08T10:43:27.066303774Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9faab0c0964a359a16017488b57072b482b9cd1e747a7df662fbdb80b3cc648c,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-461050,Uid:b3b2c4a9245ed10cb68fb667e38cfc5f,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1757328207593699691,Labels:map[string]string{component: kube-co
ntroller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3b2c4a9245ed10cb68fb667e38cfc5f,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b3b2c4a9245ed10cb68fb667e38cfc5f,kubernetes.io/config.seen: 2025-09-08T10:43:27.066305842Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d35943a7177356f18f1833370206365dc702ab989207d1d310f17c091a853fe5,Metadata:&PodSandboxMetadata{Name:kube-scheduler-functional-461050,Uid:1d437c4d856718e00a20cde2c7e3ac68,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1757328207580970863,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d437c4d856718e00a20cde2c7e3ac68,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash:
1d437c4d856718e00a20cde2c7e3ac68,kubernetes.io/config.seen: 2025-09-08T10:43:27.066300868Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:512dc8cdc8af10e597dfa1be869a45d9334598a6f88f31d77240c82147fbd28f,Metadata:&PodSandboxMetadata{Name:kube-apiserver-functional-461050,Uid:f50abb8c50375d0fffaceb1a51106782,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1757328207570989514,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f50abb8c50375d0fffaceb1a51106782,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.94:8441,kubernetes.io/config.hash: f50abb8c50375d0fffaceb1a51106782,kubernetes.io/config.seen: 2025-09-08T10:43:27.066304810Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9379051f9794bc67bdbda9734843ac8b18173c8c45b16c0a5
5cad41cddb9ff7c,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-rhlvx,Uid:8c353654-7f8e-4829-9036-8590e8c92f15,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1757328164043031365,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-rhlvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c353654-7f8e-4829-9036-8590e8c92f15,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-08T10:41:58.910649572Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:98bf8a6568fde33956b1706da31638dde3818be781782a082ce3027f9f7e4079,Metadata:&PodSandboxMetadata{Name:kube-proxy-zznjm,Uid:6051e95a-99dc-43f5-95ea-02ad00ac17b7,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1757328163935292306,Labels:map[string]string{controller-revision-hash: 6f475c7966,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-zznjm,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 6051e95a-99dc-43f5-95ea-02ad00ac17b7,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-08T10:41:58.755917615Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:37fe97ef6f13770b4f870e330f54c805e3a90ab96ef9c37573db5c2df6ea944e,Metadata:&PodSandboxMetadata{Name:kube-scheduler-functional-461050,Uid:1d437c4d856718e00a20cde2c7e3ac68,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1757328163867040380,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d437c4d856718e00a20cde2c7e3ac68,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1d437c4d856718e00a20cde2c7e3ac68,kubernetes.io/config.seen: 2025-09-08T10:41:53.337521267Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a03ded27d
faaf8e3b092152d17d30a4587016885c2d13004588dbe7034289667,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-461050,Uid:b3b2c4a9245ed10cb68fb667e38cfc5f,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1757328163761975592,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3b2c4a9245ed10cb68fb667e38cfc5f,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b3b2c4a9245ed10cb68fb667e38cfc5f,kubernetes.io/config.seen: 2025-09-08T10:41:53.337526167Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:26fa07b45404b80680271d8cbe209c2cc89fbebc46cfc5b70d458b5d0d374a5c,Metadata:&PodSandboxMetadata{Name:etcd-functional-461050,Uid:85c4f9e2b7583adb7ba45dc12ba2d33e,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1757328163750624980,Labels:map[string]string{c
omponent: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85c4f9e2b7583adb7ba45dc12ba2d33e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.94:2379,kubernetes.io/config.hash: 85c4f9e2b7583adb7ba45dc12ba2d33e,kubernetes.io/config.seen: 2025-09-08T10:41:53.337523877Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4b1bdc3a9221d8a33e2facf047e3c41f8a8a86fbed1869f51b8d11629de409f4,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:e924b08f-e5ee-4dce-a376-2ad37c5552fb,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1757328163722729620,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e924b08f-e5ee-4dce-a
376-2ad37c5552fb,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-09-08T10:41:59.788022165Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=87b51b54-f018-4345-815b-1646e1a017ce name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 08 10:54:28 functional-461050 crio[5341]: time="2025-09-08 10:54:28.581392903Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9f10c500-a15c-4ae3-ae33-8eac0efc32d7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 10:54:28 functional-461050 crio[5341]: time="2025-09-08 10:54:28.581446373Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9f10c500-a15c-4ae3-ae33-8eac0efc32d7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 10:54:28 functional-461050 crio[5341]: time="2025-09-08 10:54:28.581784295Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b2465502509fba98872645ee86e7b7d8d24543a54c460e7333981d0c67c75304,PodSandboxId:b92d8b377f41b9009d2317491536d1a9b4e3027b90da6eb12f35705b9f259883,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1757328273229849741,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-fw5qz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ad93be4e-3abd-4fc6-a8d6-2d44ecab1f22,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.
restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4b7a138b55d85b7581479c138021487114aa6126446a029bb63a39f4e6545f2,PodSandboxId:42c37503ba0d0ba35013a5a81b2843a343d04ca0f12f9bcd714b3255a67e856c,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1757328261953824784,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-855c9754f9-j2l4z,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 005f62ec-d907-4634-8440-172c8ccf1a12,},Annotations:map[string]string{io.kub
ernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93315e08f71037755142dea223cde9aca88cd0a000ebe4e1cd3490d73b0e7f93,PodSandboxId:b3c23ffd124ef2034380713f8a51137c95dd59a1f2b9bda40090ad678d952ab2,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1757328253377907787,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-77bf4d6c
4c-5q5bb,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: fee996c1-6cc1-40ab-ac79-b151d9dd80c0,},Annotations:map[string]string{io.kubernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40061b06348f6c9a47b4a10d250373f937c60b05710d0998dcc85af69567010a,PodSandboxId:c52b97fd25574106f5f301cb15bc023284d6be134afebd2677aa35296eb110aa,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1757328247830561868,Labels:
map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 21916ec1-56c0-46e0-bdf9-d3c96579dfa2,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06082000e0b2469e28d026ee49234b5a3145eef847c318ad3250da974c37c61b,PodSandboxId:8916b083f63dc77532e408884e71d2a9c568b89ebc269e88228a660c87b93040,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1757328241607127095,Labels:
map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-75c85bcc94-wq9fk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cd5a8578-1c51-4e6b-8d77-4d87fce03552,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:764dd2644ce126ad5d212d480330d4e7a2ad21ff35edf9f56003c72ed905054d,PodSandboxId:bef0ca7174e0c70088672416dbdc26326bde630bca37dbe6beb0fbd24092d015,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1757328211864018997,Labels:map[string]string{io.kuberne
tes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-rhlvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c353654-7f8e-4829-9036-8590e8c92f15,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48201cc965d2deac577f7cf7fabf4cddf0e1e451ded5b9deb3ab25cd29db69aa,PodSandboxId:7f6b2f8e008218d7a804e4ee5287fd631bc9363066303bb4eafd80b3ba11fd01,Metadata:&ContainerMetadata{Nam
e:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1757328211586335585,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zznjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6051e95a-99dc-43f5-95ea-02ad00ac17b7,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19262bb7897fb688d354ffbdf7d90a28a8c08859fdd19204596e20ffd279cbb5,PodSandboxId:73b9e56c16e3fce1f101a13565883dbe4d17369865692aa1b1f469772dfda68e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},
Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1757328211571950898,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e924b08f-e5ee-4dce-a376-2ad37c5552fb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67f688e15e085c80ac521df58c003ef531911b349f559e8004178c272d0480c8,PodSandboxId:ed214e48ae62231e25573975f081b085b930bfbd839fb7388c01d31c13009055,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f
5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1757328207854335152,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85c4f9e2b7583adb7ba45dc12ba2d33e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cb2e655e9e0ff6f6e348aae4edb2419e68fd89b88835d0334f76e81e46a6c80,PodSandboxId:d35943a7177356f18f1833370206365dc702ab989207d1d310f17c091a853fe5
,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1757328207821862182,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d437c4d856718e00a20cde2c7e3ac68,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccb85c2d4e963cea7ce40a051ab40d48
e2b71e73f1f2a083674e7f49f1a37cc7,PodSandboxId:512dc8cdc8af10e597dfa1be869a45d9334598a6f88f31d77240c82147fbd28f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1757328207849080926,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f50abb8c50375d0fffaceb1a51106782,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:815c7a8e4c5c8720b576edb284e39e0d863d729771f7dd4a1e461b9ae83e65f4,PodSandboxId:9faab0c0964a359a16017488b57072b482b9cd1e747a7df662fbdb80b3cc648c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1757328207806514510,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3b2c4a9245ed10cb68fb667e38cfc5f,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.
container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4acbfa7eaf68db26f7765249fd11606155157780e28c33889fd76647bf042dec,PodSandboxId:4b1bdc3a9221d8a33e2facf047e3c41f8a8a86fbed1869f51b8d11629de409f4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1757328168220357549,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e924b08f-e5ee-4dce-a376-2ad37c5552fb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.res
tartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6414bd6ed5137a3502ef67d652675e110adaba8993aa61c6dc3012c540df8807,PodSandboxId:a03ded27dfaaf8e3b092152d17d30a4587016885c2d13004588dbe7034289667,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1757328167867400453,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3b2c4a9245ed10cb68fb667e38cfc5f,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.
container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70756be06d1d1d3af9e97e1e0ea4b8ed9e10b8272dd41a01cbb6d7bddd660af5,PodSandboxId:37fe97ef6f13770b4f870e330f54c805e3a90ab96ef9c37573db5c2df6ea944e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1757328167878183595,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d437c4d8
56718e00a20cde2c7e3ac68,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:320c9a74122ef2acc756133339358d3c764029e0054f487cd7aef62039646fad,PodSandboxId:9379051f9794bc67bdbda9734843ac8b18173c8c45b16c0a55cad41cddb9ff7c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1757328165103427696,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66b
c5c9577-rhlvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c353654-7f8e-4829-9036-8590e8c92f15,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db4cd9e96f65d88d371f5e2538c657211027847b8520578c6a5c655cd4647fd,PodSandboxId:98bf8a6568fde33956b1706da31638dde3818be781782a082ce3027f9f7e4079,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df8
71eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1757328164423387996,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zznjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6051e95a-99dc-43f5-95ea-02ad00ac17b7,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0358a97a79f3a575cb97190abe9a3af7adda10e4ac547cab73b0b10b48651cdf,PodSandboxId:26fa07b45404b80680271d8cbe209c2cc89fbebc46cfc5b70d458b5d0d374a5c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d556
3dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1757328164281881289,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85c4f9e2b7583adb7ba45dc12ba2d33e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9f10c500-a15c-4ae3-ae33-8eac0efc32d7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 10:54:28 functional-461050 crio[5341]: time="2025-09-08 10:54:28.583503187Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8a55aca1-69bc-43b5-a07e-037045fa65a1 name=/runtime.v1.RuntimeService/Version
	Sep 08 10:54:28 functional-461050 crio[5341]: time="2025-09-08 10:54:28.583725386Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8a55aca1-69bc-43b5-a07e-037045fa65a1 name=/runtime.v1.RuntimeService/Version
	Sep 08 10:54:28 functional-461050 crio[5341]: time="2025-09-08 10:54:28.585315217Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2d259d8e-d72a-4de4-92a4-ff7bfb0830ba name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 10:54:28 functional-461050 crio[5341]: time="2025-09-08 10:54:28.586064175Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757328868586044674,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:252821,},InodesUsed:&UInt64Value{Value:120,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2d259d8e-d72a-4de4-92a4-ff7bfb0830ba name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 10:54:28 functional-461050 crio[5341]: time="2025-09-08 10:54:28.586669140Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8bfd4ada-0805-4418-ac15-074a5b6b31af name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 10:54:28 functional-461050 crio[5341]: time="2025-09-08 10:54:28.586717242Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8bfd4ada-0805-4418-ac15-074a5b6b31af name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 10:54:28 functional-461050 crio[5341]: time="2025-09-08 10:54:28.587040012Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b2465502509fba98872645ee86e7b7d8d24543a54c460e7333981d0c67c75304,PodSandboxId:b92d8b377f41b9009d2317491536d1a9b4e3027b90da6eb12f35705b9f259883,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1757328273229849741,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-fw5qz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ad93be4e-3abd-4fc6-a8d6-2d44ecab1f22,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.
restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4b7a138b55d85b7581479c138021487114aa6126446a029bb63a39f4e6545f2,PodSandboxId:42c37503ba0d0ba35013a5a81b2843a343d04ca0f12f9bcd714b3255a67e856c,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1757328261953824784,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-855c9754f9-j2l4z,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 005f62ec-d907-4634-8440-172c8ccf1a12,},Annotations:map[string]string{io.kub
ernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93315e08f71037755142dea223cde9aca88cd0a000ebe4e1cd3490d73b0e7f93,PodSandboxId:b3c23ffd124ef2034380713f8a51137c95dd59a1f2b9bda40090ad678d952ab2,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1757328253377907787,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-77bf4d6c
4c-5q5bb,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: fee996c1-6cc1-40ab-ac79-b151d9dd80c0,},Annotations:map[string]string{io.kubernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40061b06348f6c9a47b4a10d250373f937c60b05710d0998dcc85af69567010a,PodSandboxId:c52b97fd25574106f5f301cb15bc023284d6be134afebd2677aa35296eb110aa,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1757328247830561868,Labels:
map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 21916ec1-56c0-46e0-bdf9-d3c96579dfa2,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06082000e0b2469e28d026ee49234b5a3145eef847c318ad3250da974c37c61b,PodSandboxId:8916b083f63dc77532e408884e71d2a9c568b89ebc269e88228a660c87b93040,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1757328241607127095,Labels:
map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-75c85bcc94-wq9fk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cd5a8578-1c51-4e6b-8d77-4d87fce03552,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:764dd2644ce126ad5d212d480330d4e7a2ad21ff35edf9f56003c72ed905054d,PodSandboxId:bef0ca7174e0c70088672416dbdc26326bde630bca37dbe6beb0fbd24092d015,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1757328211864018997,Labels:map[string]string{io.kuberne
tes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-rhlvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c353654-7f8e-4829-9036-8590e8c92f15,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48201cc965d2deac577f7cf7fabf4cddf0e1e451ded5b9deb3ab25cd29db69aa,PodSandboxId:7f6b2f8e008218d7a804e4ee5287fd631bc9363066303bb4eafd80b3ba11fd01,Metadata:&ContainerMetadata{Nam
e:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1757328211586335585,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zznjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6051e95a-99dc-43f5-95ea-02ad00ac17b7,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19262bb7897fb688d354ffbdf7d90a28a8c08859fdd19204596e20ffd279cbb5,PodSandboxId:73b9e56c16e3fce1f101a13565883dbe4d17369865692aa1b1f469772dfda68e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},
Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1757328211571950898,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e924b08f-e5ee-4dce-a376-2ad37c5552fb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67f688e15e085c80ac521df58c003ef531911b349f559e8004178c272d0480c8,PodSandboxId:ed214e48ae62231e25573975f081b085b930bfbd839fb7388c01d31c13009055,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f
5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1757328207854335152,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85c4f9e2b7583adb7ba45dc12ba2d33e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cb2e655e9e0ff6f6e348aae4edb2419e68fd89b88835d0334f76e81e46a6c80,PodSandboxId:d35943a7177356f18f1833370206365dc702ab989207d1d310f17c091a853fe5
,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1757328207821862182,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d437c4d856718e00a20cde2c7e3ac68,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccb85c2d4e963cea7ce40a051ab40d48
e2b71e73f1f2a083674e7f49f1a37cc7,PodSandboxId:512dc8cdc8af10e597dfa1be869a45d9334598a6f88f31d77240c82147fbd28f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1757328207849080926,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f50abb8c50375d0fffaceb1a51106782,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:815c7a8e4c5c8720b576edb284e39e0d863d729771f7dd4a1e461b9ae83e65f4,PodSandboxId:9faab0c0964a359a16017488b57072b482b9cd1e747a7df662fbdb80b3cc648c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1757328207806514510,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3b2c4a9245ed10cb68fb667e38cfc5f,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.
container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4acbfa7eaf68db26f7765249fd11606155157780e28c33889fd76647bf042dec,PodSandboxId:4b1bdc3a9221d8a33e2facf047e3c41f8a8a86fbed1869f51b8d11629de409f4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1757328168220357549,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e924b08f-e5ee-4dce-a376-2ad37c5552fb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.res
tartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6414bd6ed5137a3502ef67d652675e110adaba8993aa61c6dc3012c540df8807,PodSandboxId:a03ded27dfaaf8e3b092152d17d30a4587016885c2d13004588dbe7034289667,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1757328167867400453,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3b2c4a9245ed10cb68fb667e38cfc5f,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.
container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70756be06d1d1d3af9e97e1e0ea4b8ed9e10b8272dd41a01cbb6d7bddd660af5,PodSandboxId:37fe97ef6f13770b4f870e330f54c805e3a90ab96ef9c37573db5c2df6ea944e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1757328167878183595,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d437c4d8
56718e00a20cde2c7e3ac68,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:320c9a74122ef2acc756133339358d3c764029e0054f487cd7aef62039646fad,PodSandboxId:9379051f9794bc67bdbda9734843ac8b18173c8c45b16c0a55cad41cddb9ff7c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1757328165103427696,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66b
c5c9577-rhlvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c353654-7f8e-4829-9036-8590e8c92f15,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db4cd9e96f65d88d371f5e2538c657211027847b8520578c6a5c655cd4647fd,PodSandboxId:98bf8a6568fde33956b1706da31638dde3818be781782a082ce3027f9f7e4079,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df8
71eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1757328164423387996,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zznjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6051e95a-99dc-43f5-95ea-02ad00ac17b7,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0358a97a79f3a575cb97190abe9a3af7adda10e4ac547cab73b0b10b48651cdf,PodSandboxId:26fa07b45404b80680271d8cbe209c2cc89fbebc46cfc5b70d458b5d0d374a5c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d556
3dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1757328164281881289,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85c4f9e2b7583adb7ba45dc12ba2d33e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8bfd4ada-0805-4418-ac15-074a5b6b31af name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 10:54:28 functional-461050 crio[5341]: time="2025-09-08 10:54:28.623591718Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c5490334-965c-4dfa-9f0c-cbc92389ab90 name=/runtime.v1.RuntimeService/Version
	Sep 08 10:54:28 functional-461050 crio[5341]: time="2025-09-08 10:54:28.623716453Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c5490334-965c-4dfa-9f0c-cbc92389ab90 name=/runtime.v1.RuntimeService/Version
	Sep 08 10:54:28 functional-461050 crio[5341]: time="2025-09-08 10:54:28.624820815Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f332c61b-ba9e-4b6b-a9bd-99f204dc208b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 10:54:28 functional-461050 crio[5341]: time="2025-09-08 10:54:28.625698103Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757328868625674879,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:252821,},InodesUsed:&UInt64Value{Value:120,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f332c61b-ba9e-4b6b-a9bd-99f204dc208b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 10:54:28 functional-461050 crio[5341]: time="2025-09-08 10:54:28.626284557Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f914cd86-5316-45f8-842a-1919cd021a1f name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 10:54:28 functional-461050 crio[5341]: time="2025-09-08 10:54:28.626502154Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f914cd86-5316-45f8-842a-1919cd021a1f name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 10:54:28 functional-461050 crio[5341]: time="2025-09-08 10:54:28.626954507Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b2465502509fba98872645ee86e7b7d8d24543a54c460e7333981d0c67c75304,PodSandboxId:b92d8b377f41b9009d2317491536d1a9b4e3027b90da6eb12f35705b9f259883,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1757328273229849741,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-fw5qz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ad93be4e-3abd-4fc6-a8d6-2d44ecab1f22,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.
restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4b7a138b55d85b7581479c138021487114aa6126446a029bb63a39f4e6545f2,PodSandboxId:42c37503ba0d0ba35013a5a81b2843a343d04ca0f12f9bcd714b3255a67e856c,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1757328261953824784,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-855c9754f9-j2l4z,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 005f62ec-d907-4634-8440-172c8ccf1a12,},Annotations:map[string]string{io.kub
ernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93315e08f71037755142dea223cde9aca88cd0a000ebe4e1cd3490d73b0e7f93,PodSandboxId:b3c23ffd124ef2034380713f8a51137c95dd59a1f2b9bda40090ad678d952ab2,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1757328253377907787,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-77bf4d6c
4c-5q5bb,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: fee996c1-6cc1-40ab-ac79-b151d9dd80c0,},Annotations:map[string]string{io.kubernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40061b06348f6c9a47b4a10d250373f937c60b05710d0998dcc85af69567010a,PodSandboxId:c52b97fd25574106f5f301cb15bc023284d6be134afebd2677aa35296eb110aa,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1757328247830561868,Labels:
map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 21916ec1-56c0-46e0-bdf9-d3c96579dfa2,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06082000e0b2469e28d026ee49234b5a3145eef847c318ad3250da974c37c61b,PodSandboxId:8916b083f63dc77532e408884e71d2a9c568b89ebc269e88228a660c87b93040,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1757328241607127095,Labels:
map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-75c85bcc94-wq9fk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cd5a8578-1c51-4e6b-8d77-4d87fce03552,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:764dd2644ce126ad5d212d480330d4e7a2ad21ff35edf9f56003c72ed905054d,PodSandboxId:bef0ca7174e0c70088672416dbdc26326bde630bca37dbe6beb0fbd24092d015,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1757328211864018997,Labels:map[string]string{io.kuberne
tes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-rhlvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c353654-7f8e-4829-9036-8590e8c92f15,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48201cc965d2deac577f7cf7fabf4cddf0e1e451ded5b9deb3ab25cd29db69aa,PodSandboxId:7f6b2f8e008218d7a804e4ee5287fd631bc9363066303bb4eafd80b3ba11fd01,Metadata:&ContainerMetadata{Nam
e:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1757328211586335585,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zznjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6051e95a-99dc-43f5-95ea-02ad00ac17b7,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19262bb7897fb688d354ffbdf7d90a28a8c08859fdd19204596e20ffd279cbb5,PodSandboxId:73b9e56c16e3fce1f101a13565883dbe4d17369865692aa1b1f469772dfda68e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},
Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1757328211571950898,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e924b08f-e5ee-4dce-a376-2ad37c5552fb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67f688e15e085c80ac521df58c003ef531911b349f559e8004178c272d0480c8,PodSandboxId:ed214e48ae62231e25573975f081b085b930bfbd839fb7388c01d31c13009055,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f
5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1757328207854335152,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85c4f9e2b7583adb7ba45dc12ba2d33e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cb2e655e9e0ff6f6e348aae4edb2419e68fd89b88835d0334f76e81e46a6c80,PodSandboxId:d35943a7177356f18f1833370206365dc702ab989207d1d310f17c091a853fe5
,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1757328207821862182,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d437c4d856718e00a20cde2c7e3ac68,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccb85c2d4e963cea7ce40a051ab40d48
e2b71e73f1f2a083674e7f49f1a37cc7,PodSandboxId:512dc8cdc8af10e597dfa1be869a45d9334598a6f88f31d77240c82147fbd28f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1757328207849080926,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f50abb8c50375d0fffaceb1a51106782,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:815c7a8e4c5c8720b576edb284e39e0d863d729771f7dd4a1e461b9ae83e65f4,PodSandboxId:9faab0c0964a359a16017488b57072b482b9cd1e747a7df662fbdb80b3cc648c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1757328207806514510,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3b2c4a9245ed10cb68fb667e38cfc5f,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.
container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4acbfa7eaf68db26f7765249fd11606155157780e28c33889fd76647bf042dec,PodSandboxId:4b1bdc3a9221d8a33e2facf047e3c41f8a8a86fbed1869f51b8d11629de409f4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1757328168220357549,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e924b08f-e5ee-4dce-a376-2ad37c5552fb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.res
tartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6414bd6ed5137a3502ef67d652675e110adaba8993aa61c6dc3012c540df8807,PodSandboxId:a03ded27dfaaf8e3b092152d17d30a4587016885c2d13004588dbe7034289667,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1757328167867400453,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3b2c4a9245ed10cb68fb667e38cfc5f,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.
container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70756be06d1d1d3af9e97e1e0ea4b8ed9e10b8272dd41a01cbb6d7bddd660af5,PodSandboxId:37fe97ef6f13770b4f870e330f54c805e3a90ab96ef9c37573db5c2df6ea944e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1757328167878183595,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d437c4d8
56718e00a20cde2c7e3ac68,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:320c9a74122ef2acc756133339358d3c764029e0054f487cd7aef62039646fad,PodSandboxId:9379051f9794bc67bdbda9734843ac8b18173c8c45b16c0a55cad41cddb9ff7c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1757328165103427696,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66b
c5c9577-rhlvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c353654-7f8e-4829-9036-8590e8c92f15,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db4cd9e96f65d88d371f5e2538c657211027847b8520578c6a5c655cd4647fd,PodSandboxId:98bf8a6568fde33956b1706da31638dde3818be781782a082ce3027f9f7e4079,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df8
71eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1757328164423387996,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zznjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6051e95a-99dc-43f5-95ea-02ad00ac17b7,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0358a97a79f3a575cb97190abe9a3af7adda10e4ac547cab73b0b10b48651cdf,PodSandboxId:26fa07b45404b80680271d8cbe209c2cc89fbebc46cfc5b70d458b5d0d374a5c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d556
3dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1757328164281881289,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85c4f9e2b7583adb7ba45dc12ba2d33e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f914cd86-5316-45f8-842a-1919cd021a1f name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 10:54:28 functional-461050 crio[5341]: time="2025-09-08 10:54:28.669266104Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3ff9893d-7be3-4833-99b0-6454e2fd108b name=/runtime.v1.RuntimeService/Version
	Sep 08 10:54:28 functional-461050 crio[5341]: time="2025-09-08 10:54:28.669354627Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3ff9893d-7be3-4833-99b0-6454e2fd108b name=/runtime.v1.RuntimeService/Version
	Sep 08 10:54:28 functional-461050 crio[5341]: time="2025-09-08 10:54:28.670628701Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c8cbace9-508c-4d35-9523-cf3333c818dd name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 10:54:28 functional-461050 crio[5341]: time="2025-09-08 10:54:28.671821453Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757328868671677136,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:252821,},InodesUsed:&UInt64Value{Value:120,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c8cbace9-508c-4d35-9523-cf3333c818dd name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 10:54:28 functional-461050 crio[5341]: time="2025-09-08 10:54:28.672716084Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=415c3575-a3a6-4922-a634-55abc02ade01 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 10:54:28 functional-461050 crio[5341]: time="2025-09-08 10:54:28.672813417Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=415c3575-a3a6-4922-a634-55abc02ade01 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 10:54:28 functional-461050 crio[5341]: time="2025-09-08 10:54:28.673814181Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b2465502509fba98872645ee86e7b7d8d24543a54c460e7333981d0c67c75304,PodSandboxId:b92d8b377f41b9009d2317491536d1a9b4e3027b90da6eb12f35705b9f259883,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1757328273229849741,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-fw5qz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ad93be4e-3abd-4fc6-a8d6-2d44ecab1f22,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.
restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4b7a138b55d85b7581479c138021487114aa6126446a029bb63a39f4e6545f2,PodSandboxId:42c37503ba0d0ba35013a5a81b2843a343d04ca0f12f9bcd714b3255a67e856c,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1757328261953824784,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-855c9754f9-j2l4z,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 005f62ec-d907-4634-8440-172c8ccf1a12,},Annotations:map[string]string{io.kub
ernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93315e08f71037755142dea223cde9aca88cd0a000ebe4e1cd3490d73b0e7f93,PodSandboxId:b3c23ffd124ef2034380713f8a51137c95dd59a1f2b9bda40090ad678d952ab2,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1757328253377907787,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-77bf4d6c
4c-5q5bb,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: fee996c1-6cc1-40ab-ac79-b151d9dd80c0,},Annotations:map[string]string{io.kubernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40061b06348f6c9a47b4a10d250373f937c60b05710d0998dcc85af69567010a,PodSandboxId:c52b97fd25574106f5f301cb15bc023284d6be134afebd2677aa35296eb110aa,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1757328247830561868,Labels:
map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 21916ec1-56c0-46e0-bdf9-d3c96579dfa2,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06082000e0b2469e28d026ee49234b5a3145eef847c318ad3250da974c37c61b,PodSandboxId:8916b083f63dc77532e408884e71d2a9c568b89ebc269e88228a660c87b93040,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1757328241607127095,Labels:
map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-75c85bcc94-wq9fk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cd5a8578-1c51-4e6b-8d77-4d87fce03552,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:764dd2644ce126ad5d212d480330d4e7a2ad21ff35edf9f56003c72ed905054d,PodSandboxId:bef0ca7174e0c70088672416dbdc26326bde630bca37dbe6beb0fbd24092d015,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1757328211864018997,Labels:map[string]string{io.kuberne
tes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-rhlvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c353654-7f8e-4829-9036-8590e8c92f15,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48201cc965d2deac577f7cf7fabf4cddf0e1e451ded5b9deb3ab25cd29db69aa,PodSandboxId:7f6b2f8e008218d7a804e4ee5287fd631bc9363066303bb4eafd80b3ba11fd01,Metadata:&ContainerMetadata{Nam
e:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1757328211586335585,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zznjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6051e95a-99dc-43f5-95ea-02ad00ac17b7,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19262bb7897fb688d354ffbdf7d90a28a8c08859fdd19204596e20ffd279cbb5,PodSandboxId:73b9e56c16e3fce1f101a13565883dbe4d17369865692aa1b1f469772dfda68e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},
Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1757328211571950898,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e924b08f-e5ee-4dce-a376-2ad37c5552fb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67f688e15e085c80ac521df58c003ef531911b349f559e8004178c272d0480c8,PodSandboxId:ed214e48ae62231e25573975f081b085b930bfbd839fb7388c01d31c13009055,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f
5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1757328207854335152,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85c4f9e2b7583adb7ba45dc12ba2d33e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cb2e655e9e0ff6f6e348aae4edb2419e68fd89b88835d0334f76e81e46a6c80,PodSandboxId:d35943a7177356f18f1833370206365dc702ab989207d1d310f17c091a853fe5
,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1757328207821862182,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d437c4d856718e00a20cde2c7e3ac68,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccb85c2d4e963cea7ce40a051ab40d48
e2b71e73f1f2a083674e7f49f1a37cc7,PodSandboxId:512dc8cdc8af10e597dfa1be869a45d9334598a6f88f31d77240c82147fbd28f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1757328207849080926,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f50abb8c50375d0fffaceb1a51106782,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:815c7a8e4c5c8720b576edb284e39e0d863d729771f7dd4a1e461b9ae83e65f4,PodSandboxId:9faab0c0964a359a16017488b57072b482b9cd1e747a7df662fbdb80b3cc648c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1757328207806514510,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3b2c4a9245ed10cb68fb667e38cfc5f,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.
container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4acbfa7eaf68db26f7765249fd11606155157780e28c33889fd76647bf042dec,PodSandboxId:4b1bdc3a9221d8a33e2facf047e3c41f8a8a86fbed1869f51b8d11629de409f4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1757328168220357549,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e924b08f-e5ee-4dce-a376-2ad37c5552fb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.res
tartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6414bd6ed5137a3502ef67d652675e110adaba8993aa61c6dc3012c540df8807,PodSandboxId:a03ded27dfaaf8e3b092152d17d30a4587016885c2d13004588dbe7034289667,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1757328167867400453,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3b2c4a9245ed10cb68fb667e38cfc5f,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.
container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70756be06d1d1d3af9e97e1e0ea4b8ed9e10b8272dd41a01cbb6d7bddd660af5,PodSandboxId:37fe97ef6f13770b4f870e330f54c805e3a90ab96ef9c37573db5c2df6ea944e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1757328167878183595,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d437c4d8
56718e00a20cde2c7e3ac68,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:320c9a74122ef2acc756133339358d3c764029e0054f487cd7aef62039646fad,PodSandboxId:9379051f9794bc67bdbda9734843ac8b18173c8c45b16c0a55cad41cddb9ff7c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1757328165103427696,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66b
c5c9577-rhlvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c353654-7f8e-4829-9036-8590e8c92f15,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db4cd9e96f65d88d371f5e2538c657211027847b8520578c6a5c655cd4647fd,PodSandboxId:98bf8a6568fde33956b1706da31638dde3818be781782a082ce3027f9f7e4079,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df8
71eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1757328164423387996,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zznjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6051e95a-99dc-43f5-95ea-02ad00ac17b7,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0358a97a79f3a575cb97190abe9a3af7adda10e4ac547cab73b0b10b48651cdf,PodSandboxId:26fa07b45404b80680271d8cbe209c2cc89fbebc46cfc5b70d458b5d0d374a5c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d556
3dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1757328164281881289,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-461050,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85c4f9e2b7583adb7ba45dc12ba2d33e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=415c3575-a3a6-4922-a634-55abc02ade01 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	b2465502509fb       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6            9 minutes ago       Running             echo-server                 0                   b92d8b377f41b       hello-node-connect-7d85dfc575-fw5qz
	a4b7a138b55d8       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         10 minutes ago      Running             kubernetes-dashboard        0                   42c37503ba0d0       kubernetes-dashboard-855c9754f9-j2l4z
	93315e08f7103       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   10 minutes ago      Running             dashboard-metrics-scraper   0                   b3c23ffd124ef       dashboard-metrics-scraper-77bf4d6c4c-5q5bb
	40061b06348f6       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              10 minutes ago      Exited              mount-munger                0                   c52b97fd25574       busybox-mount
	06082000e0b24       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6            10 minutes ago      Running             echo-server                 0                   8916b083f63dc       hello-node-75c85bcc94-wq9fk
	764dd2644ce12       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 10 minutes ago      Running             coredns                     2                   bef0ca7174e0c       coredns-66bc5c9577-rhlvx
	48201cc965d2d       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                 10 minutes ago      Running             kube-proxy                  2                   7f6b2f8e00821       kube-proxy-zznjm
	19262bb7897fb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         3                   73b9e56c16e3f       storage-provisioner
	67f688e15e085       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 11 minutes ago      Running             etcd                        2                   ed214e48ae622       etcd-functional-461050
	ccb85c2d4e963       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                                 11 minutes ago      Running             kube-apiserver              0                   512dc8cdc8af1       kube-apiserver-functional-461050
	3cb2e655e9e0f       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                 11 minutes ago      Running             kube-scheduler              3                   d35943a717735       kube-scheduler-functional-461050
	815c7a8e4c5c8       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                 11 minutes ago      Running             kube-controller-manager     3                   9faab0c0964a3       kube-controller-manager-functional-461050
	4acbfa7eaf68d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         2                   4b1bdc3a9221d       storage-provisioner
	70756be06d1d1       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                 11 minutes ago      Exited              kube-scheduler              2                   37fe97ef6f137       kube-scheduler-functional-461050
	6414bd6ed5137       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                 11 minutes ago      Exited              kube-controller-manager     2                   a03ded27dfaaf       kube-controller-manager-functional-461050
	320c9a74122ef       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Exited              coredns                     1                   9379051f9794b       coredns-66bc5c9577-rhlvx
	3db4cd9e96f65       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                 11 minutes ago      Exited              kube-proxy                  1                   98bf8a6568fde       kube-proxy-zznjm
	0358a97a79f3a       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 11 minutes ago      Exited              etcd                        1                   26fa07b45404b       etcd-functional-461050
	
	
	==> coredns [320c9a74122ef2acc756133339358d3c764029e0054f487cd7aef62039646fad] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54198 - 30337 "HINFO IN 5301851271321554043.7617544016246431185. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012923391s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [764dd2644ce126ad5d212d480330d4e7a2ad21ff35edf9f56003c72ed905054d] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56984 - 36817 "HINFO IN 6878509914046829493.5908532982915739693. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015136575s
	
	
	==> describe nodes <==
	Name:               functional-461050
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-461050
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9b5c9e357ec605e3f7a3fbfd5f3e59fa37db6ba2
	                    minikube.k8s.io/name=functional-461050
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T10_41_54_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 10:41:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-461050
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 10:54:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 10:54:03 +0000   Mon, 08 Sep 2025 10:41:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 10:54:03 +0000   Mon, 08 Sep 2025 10:41:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 10:54:03 +0000   Mon, 08 Sep 2025 10:41:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 10:54:03 +0000   Mon, 08 Sep 2025 10:41:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.94
	  Hostname:    functional-461050
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	System Info:
	  Machine ID:                 5514ec08cd9f46aa8b8ff6a001f5b7d6
	  System UUID:                5514ec08-cd9f-46aa-8b8f-f6a001f5b7d6
	  Boot ID:                    ea734251-d162-44bc-b246-e9ac04071e0d
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-wq9fk                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-fw5qz           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-gskqp                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m51s
	  kube-system                 coredns-66bc5c9577-rhlvx                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-functional-461050                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-functional-461050              250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-461050     200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-zznjm                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-461050              100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-5q5bb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-j2l4z         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-461050 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-461050 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node functional-461050 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node functional-461050 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node functional-461050 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node functional-461050 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m                kubelet          Node functional-461050 status is now: NodeReady
	  Normal  RegisteredNode           12m                node-controller  Node functional-461050 event: Registered Node functional-461050 in Controller
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-461050 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-461050 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node functional-461050 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11m                node-controller  Node functional-461050 event: Registered Node functional-461050 in Controller
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-461050 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-461050 status is now: NodeHasSufficientMemory
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node functional-461050 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10m                node-controller  Node functional-461050 event: Registered Node functional-461050 in Controller
	
	
	==> dmesg <==
	[  +0.000019] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.081351] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.090104] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.025206] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.718328] kauditd_printk_skb: 13 callbacks suppressed
	[Sep 8 10:42] kauditd_printk_skb: 248 callbacks suppressed
	[ +20.339698] kauditd_printk_skb: 38 callbacks suppressed
	[  +0.119515] kauditd_printk_skb: 11 callbacks suppressed
	[  +0.142244] kauditd_printk_skb: 313 callbacks suppressed
	[  +2.513734] kauditd_printk_skb: 67 callbacks suppressed
	[Sep 8 10:43] kauditd_printk_skb: 12 callbacks suppressed
	[  +1.043386] kauditd_printk_skb: 167 callbacks suppressed
	[  +1.825130] kauditd_printk_skb: 192 callbacks suppressed
	[ +14.675467] kauditd_printk_skb: 2 callbacks suppressed
	[  +1.210005] kauditd_printk_skb: 91 callbacks suppressed
	[Sep 8 10:44] kauditd_printk_skb: 173 callbacks suppressed
	[  +3.346734] kauditd_printk_skb: 32 callbacks suppressed
	[  +0.770389] kauditd_printk_skb: 67 callbacks suppressed
	[  +8.544685] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.366766] kauditd_printk_skb: 11 callbacks suppressed
	[  +3.441172] crun[9138]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +1.753922] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.000045] kauditd_printk_skb: 35 callbacks suppressed
	[Sep 8 10:45] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [0358a97a79f3a575cb97190abe9a3af7adda10e4ac547cab73b0b10b48651cdf] <==
	{"level":"warn","ts":"2025-09-08T10:42:46.451371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:42:46.475369Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:42:46.483316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:42:46.494964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:42:46.509173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:42:46.527497Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:42:46.626324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56922","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-08T10:43:16.389880Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-08T10:43:16.389946Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-461050","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.94:2380"],"advertise-client-urls":["https://192.168.39.94:2379"]}
	{"level":"error","ts":"2025-09-08T10:43:16.390009Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-08T10:43:16.459343Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-08T10:43:16.460873Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-08T10:43:16.460913Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"c23cd90330b5fc4f","current-leader-member-id":"c23cd90330b5fc4f"}
	{"level":"info","ts":"2025-09-08T10:43:16.460984Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-08T10:43:16.460992Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-08T10:43:16.460975Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-08T10:43:16.461041Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-08T10:43:16.461050Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-08T10:43:16.461087Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.94:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-08T10:43:16.461094Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.94:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-08T10:43:16.461100Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.94:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-08T10:43:16.463931Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.94:2380"}
	{"level":"error","ts":"2025-09-08T10:43:16.464009Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.94:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-08T10:43:16.464044Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.94:2380"}
	{"level":"info","ts":"2025-09-08T10:43:16.464052Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-461050","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.94:2380"],"advertise-client-urls":["https://192.168.39.94:2379"]}
	
	
	==> etcd [67f688e15e085c80ac521df58c003ef531911b349f559e8004178c272d0480c8] <==
	{"level":"warn","ts":"2025-09-08T10:43:29.909077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:43:29.919185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:43:29.929336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:43:29.945681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:43:29.968087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:43:29.975803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:43:29.985462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:43:30.010417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:43:30.026819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:43:30.035192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:43:30.046648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:43:30.056351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:43:30.145613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35650","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-08T10:44:21.768017Z","caller":"traceutil/trace.go:172","msg":"trace[111164942] linearizableReadLoop","detail":"{readStateIndex:918; appliedIndex:918; }","duration":"273.791376ms","start":"2025-09-08T10:44:21.494199Z","end":"2025-09-08T10:44:21.767990Z","steps":["trace[111164942] 'read index received'  (duration: 273.785768ms)","trace[111164942] 'applied index is now lower than readState.Index'  (duration: 4.773µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-08T10:44:21.768303Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"273.991406ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-08T10:44:21.768364Z","caller":"traceutil/trace.go:172","msg":"trace[206232410] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:836; }","duration":"274.157342ms","start":"2025-09-08T10:44:21.494195Z","end":"2025-09-08T10:44:21.768352Z","steps":["trace[206232410] 'agreement among raft nodes before linearized reading'  (duration: 273.965354ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T10:44:21.769140Z","caller":"traceutil/trace.go:172","msg":"trace[1596888450] transaction","detail":"{read_only:false; response_revision:837; number_of_response:1; }","duration":"414.592267ms","start":"2025-09-08T10:44:21.354539Z","end":"2025-09-08T10:44:21.769132Z","steps":["trace[1596888450] 'process raft request'  (duration: 414.010857ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-08T10:44:21.770862Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-08T10:44:21.354524Z","time spent":"414.790132ms","remote":"127.0.0.1:34848","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:836 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2025-09-08T10:45:06.299524Z","caller":"traceutil/trace.go:172","msg":"trace[2091774038] linearizableReadLoop","detail":"{readStateIndex:1019; appliedIndex:1019; }","duration":"124.59109ms","start":"2025-09-08T10:45:06.174903Z","end":"2025-09-08T10:45:06.299494Z","steps":["trace[2091774038] 'read index received'  (duration: 124.582756ms)","trace[2091774038] 'applied index is now lower than readState.Index'  (duration: 7.533µs)"],"step_count":2}
	{"level":"info","ts":"2025-09-08T10:45:06.299645Z","caller":"traceutil/trace.go:172","msg":"trace[1387463592] transaction","detail":"{read_only:false; response_revision:927; number_of_response:1; }","duration":"221.289153ms","start":"2025-09-08T10:45:06.078345Z","end":"2025-09-08T10:45:06.299635Z","steps":["trace[1387463592] 'process raft request'  (duration: 221.191457ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-08T10:45:06.299696Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.746335ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-08T10:45:06.299713Z","caller":"traceutil/trace.go:172","msg":"trace[1872582710] range","detail":"{range_begin:/registry/serviceaccounts; range_end:; response_count:0; response_revision:927; }","duration":"124.810577ms","start":"2025-09-08T10:45:06.174898Z","end":"2025-09-08T10:45:06.299708Z","steps":["trace[1872582710] 'agreement among raft nodes before linearized reading'  (duration: 124.723219ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T10:53:29.090364Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1137}
	{"level":"info","ts":"2025-09-08T10:53:29.114743Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1137,"took":"23.976819ms","hash":3795183914,"current-db-size-bytes":3547136,"current-db-size":"3.5 MB","current-db-size-in-use-bytes":1650688,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2025-09-08T10:53:29.114794Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3795183914,"revision":1137,"compact-revision":-1}
	
	
	==> kernel <==
	 10:54:29 up 13 min,  0 users,  load average: 0.03, 0.12, 0.13
	Linux functional-461050 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep  4 13:14:36 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [ccb85c2d4e963cea7ce40a051ab40d48e2b71e73f1f2a083674e7f49f1a37cc7] <==
	I0908 10:43:56.701863       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.109.62.81"}
	I0908 10:43:59.614620       1 controller.go:667] quota admission added evaluator for: namespaces
	I0908 10:43:59.893592       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.179.41"}
	I0908 10:43:59.912005       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.210.240"}
	I0908 10:44:12.160749       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.110.232.226"}
	I0908 10:44:27.321315       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.100.42.130"}
	I0908 10:44:35.302034       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0908 10:44:37.285917       1 conn.go:339] Error on socket receive: read tcp 192.168.39.94:8441->192.168.39.1:59784: use of closed network connection
	I0908 10:44:39.666666       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 10:45:41.353971       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 10:45:44.507309       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 10:46:57.605662       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 10:47:10.109445       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 10:48:12.265675       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 10:48:27.292010       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 10:49:20.445739       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 10:49:31.355141       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 10:50:37.374111       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 10:50:56.463023       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 10:51:42.110737       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 10:52:25.823574       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 10:52:46.190059       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 10:53:30.910847       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0908 10:53:53.941390       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 10:54:14.924806       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [6414bd6ed5137a3502ef67d652675e110adaba8993aa61c6dc3012c540df8807] <==
	I0908 10:42:52.036410       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0908 10:42:52.036469       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-461050"
	I0908 10:42:52.036495       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0908 10:42:52.036529       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0908 10:42:52.040697       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0908 10:42:52.040765       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0908 10:42:52.040785       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0908 10:42:52.040791       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0908 10:42:52.040796       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0908 10:42:52.040904       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0908 10:42:52.043312       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0908 10:42:52.043429       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0908 10:42:52.044616       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0908 10:42:52.045794       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0908 10:42:52.048134       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0908 10:42:52.048202       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0908 10:42:52.054531       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0908 10:42:52.056809       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 10:42:52.056820       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0908 10:42:52.056825       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0908 10:42:52.061616       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0908 10:42:52.063597       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0908 10:42:52.065166       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0908 10:42:52.065291       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0908 10:42:52.083511       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [815c7a8e4c5c8720b576edb284e39e0d863d729771f7dd4a1e461b9ae83e65f4] <==
	I0908 10:43:34.384861       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0908 10:43:34.385035       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0908 10:43:34.386670       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0908 10:43:34.387696       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0908 10:43:34.388968       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0908 10:43:34.389102       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0908 10:43:34.391458       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0908 10:43:34.391498       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0908 10:43:34.393897       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0908 10:43:34.408249       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 10:43:34.413579       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0908 10:43:34.418925       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0908 10:43:34.421548       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0908 10:43:34.426857       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0908 10:43:34.433541       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0908 10:43:34.434776       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0908 10:43:34.434836       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0908 10:43:34.434860       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	E0908 10:43:59.726462       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 10:43:59.726698       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 10:43:59.739571       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 10:43:59.741059       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 10:43:59.750861       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 10:43:59.751145       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 10:43:59.763575       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [3db4cd9e96f65d88d371f5e2538c657211027847b8520578c6a5c655cd4647fd] <==
	I0908 10:42:45.379281       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 10:42:47.582380       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 10:42:47.582421       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.94"]
	E0908 10:42:47.582476       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 10:42:47.647294       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0908 10:42:47.647372       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0908 10:42:47.647395       1 server_linux.go:132] "Using iptables Proxier"
	I0908 10:42:47.661637       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 10:42:47.662524       1 server.go:527] "Version info" version="v1.34.0"
	I0908 10:42:47.662538       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 10:42:47.665344       1 config.go:106] "Starting endpoint slice config controller"
	I0908 10:42:47.672637       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 10:42:47.665696       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 10:42:47.672701       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 10:42:47.670525       1 config.go:200] "Starting service config controller"
	I0908 10:42:47.672730       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 10:42:47.673667       1 config.go:309] "Starting node config controller"
	I0908 10:42:47.675528       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 10:42:47.677288       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 10:42:47.772980       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0908 10:42:47.773044       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0908 10:42:47.772816       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [48201cc965d2deac577f7cf7fabf4cddf0e1e451ded5b9deb3ab25cd29db69aa] <==
	I0908 10:43:31.923523       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 10:43:32.024425       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 10:43:32.024467       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.94"]
	E0908 10:43:32.026200       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 10:43:32.097766       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0908 10:43:32.097861       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0908 10:43:32.097907       1 server_linux.go:132] "Using iptables Proxier"
	I0908 10:43:32.114530       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 10:43:32.115926       1 server.go:527] "Version info" version="v1.34.0"
	I0908 10:43:32.115974       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 10:43:32.126604       1 config.go:200] "Starting service config controller"
	I0908 10:43:32.126616       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 10:43:32.126630       1 config.go:106] "Starting endpoint slice config controller"
	I0908 10:43:32.126634       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 10:43:32.126642       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 10:43:32.126646       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 10:43:32.131390       1 config.go:309] "Starting node config controller"
	I0908 10:43:32.131529       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 10:43:32.131555       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 10:43:32.227296       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0908 10:43:32.227329       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0908 10:43:32.227561       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [3cb2e655e9e0ff6f6e348aae4edb2419e68fd89b88835d0334f76e81e46a6c80] <==
	I0908 10:43:30.263359       1 serving.go:386] Generated self-signed cert in-memory
	I0908 10:43:31.021731       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0908 10:43:31.021824       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 10:43:31.033155       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0908 10:43:31.033316       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0908 10:43:31.033344       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0908 10:43:31.033362       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0908 10:43:31.040168       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 10:43:31.040202       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 10:43:31.040261       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 10:43:31.040268       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 10:43:31.133903       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0908 10:43:31.141310       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 10:43:31.141630       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [70756be06d1d1d3af9e97e1e0ea4b8ed9e10b8272dd41a01cbb6d7bddd660af5] <==
	I0908 10:42:49.281354       1 serving.go:386] Generated self-signed cert in-memory
	I0908 10:42:50.464864       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0908 10:42:50.464904       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 10:42:50.482703       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0908 10:42:50.482857       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0908 10:42:50.482883       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0908 10:42:50.482917       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0908 10:42:50.493459       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 10:42:50.493495       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 10:42:50.493513       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 10:42:50.493517       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 10:42:50.582968       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0908 10:42:50.593922       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 10:42:50.594004       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 10:43:16.383153       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0908 10:43:16.387383       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0908 10:43:16.387619       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0908 10:43:16.387841       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 10:43:16.388046       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 10:43:16.388420       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I0908 10:43:16.390176       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0908 10:43:16.392891       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 08 10:53:37 functional-461050 kubelet[5683]: E0908 10:53:37.449758    5683 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757328817448954930  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:252821}  inodes_used:{value:120}}"
	Sep 08 10:53:43 functional-461050 kubelet[5683]: E0908 10:53:43.145200    5683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-gskqp" podUID="3b715ee3-43fc-4c1f-a057-a7722ba4ec27"
	Sep 08 10:53:46 functional-461050 kubelet[5683]: E0908 10:53:46.142910    5683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="4d3dbe5d-2821-433e-b310-606725bae985"
	Sep 08 10:53:47 functional-461050 kubelet[5683]: E0908 10:53:47.451317    5683 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757328827450930409  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:252821}  inodes_used:{value:120}}"
	Sep 08 10:53:47 functional-461050 kubelet[5683]: E0908 10:53:47.451814    5683 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757328827450930409  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:252821}  inodes_used:{value:120}}"
	Sep 08 10:53:57 functional-461050 kubelet[5683]: E0908 10:53:57.143951    5683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-gskqp" podUID="3b715ee3-43fc-4c1f-a057-a7722ba4ec27"
	Sep 08 10:53:57 functional-461050 kubelet[5683]: E0908 10:53:57.466059    5683 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757328837459289838  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:252821}  inodes_used:{value:120}}"
	Sep 08 10:53:57 functional-461050 kubelet[5683]: E0908 10:53:57.466106    5683 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757328837459289838  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:252821}  inodes_used:{value:120}}"
	Sep 08 10:54:00 functional-461050 kubelet[5683]: E0908 10:54:00.142475    5683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="4d3dbe5d-2821-433e-b310-606725bae985"
	Sep 08 10:54:07 functional-461050 kubelet[5683]: E0908 10:54:07.467829    5683 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757328847467415637  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:252821}  inodes_used:{value:120}}"
	Sep 08 10:54:07 functional-461050 kubelet[5683]: E0908 10:54:07.467854    5683 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757328847467415637  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:252821}  inodes_used:{value:120}}"
	Sep 08 10:54:12 functional-461050 kubelet[5683]: E0908 10:54:12.144306    5683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-gskqp" podUID="3b715ee3-43fc-4c1f-a057-a7722ba4ec27"
	Sep 08 10:54:15 functional-461050 kubelet[5683]: E0908 10:54:15.142743    5683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="4d3dbe5d-2821-433e-b310-606725bae985"
	Sep 08 10:54:17 functional-461050 kubelet[5683]: E0908 10:54:17.470514    5683 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757328857470135950  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:252821}  inodes_used:{value:120}}"
	Sep 08 10:54:17 functional-461050 kubelet[5683]: E0908 10:54:17.470780    5683 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757328857470135950  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:252821}  inodes_used:{value:120}}"
	Sep 08 10:54:25 functional-461050 kubelet[5683]: E0908 10:54:25.144291    5683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-gskqp" podUID="3b715ee3-43fc-4c1f-a057-a7722ba4ec27"
	Sep 08 10:54:27 functional-461050 kubelet[5683]: E0908 10:54:27.212108    5683 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod1d437c4d856718e00a20cde2c7e3ac68/crio-37fe97ef6f13770b4f870e330f54c805e3a90ab96ef9c37573db5c2df6ea944e: Error finding container 37fe97ef6f13770b4f870e330f54c805e3a90ab96ef9c37573db5c2df6ea944e: Status 404 returned error can't find the container with id 37fe97ef6f13770b4f870e330f54c805e3a90ab96ef9c37573db5c2df6ea944e
	Sep 08 10:54:27 functional-461050 kubelet[5683]: E0908 10:54:27.212476    5683 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod8c353654-7f8e-4829-9036-8590e8c92f15/crio-9379051f9794bc67bdbda9734843ac8b18173c8c45b16c0a55cad41cddb9ff7c: Error finding container 9379051f9794bc67bdbda9734843ac8b18173c8c45b16c0a55cad41cddb9ff7c: Status 404 returned error can't find the container with id 9379051f9794bc67bdbda9734843ac8b18173c8c45b16c0a55cad41cddb9ff7c
	Sep 08 10:54:27 functional-461050 kubelet[5683]: E0908 10:54:27.213182    5683 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pode924b08f-e5ee-4dce-a376-2ad37c5552fb/crio-4b1bdc3a9221d8a33e2facf047e3c41f8a8a86fbed1869f51b8d11629de409f4: Error finding container 4b1bdc3a9221d8a33e2facf047e3c41f8a8a86fbed1869f51b8d11629de409f4: Status 404 returned error can't find the container with id 4b1bdc3a9221d8a33e2facf047e3c41f8a8a86fbed1869f51b8d11629de409f4
	Sep 08 10:54:27 functional-461050 kubelet[5683]: E0908 10:54:27.213552    5683 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod6051e95a-99dc-43f5-95ea-02ad00ac17b7/crio-98bf8a6568fde33956b1706da31638dde3818be781782a082ce3027f9f7e4079: Error finding container 98bf8a6568fde33956b1706da31638dde3818be781782a082ce3027f9f7e4079: Status 404 returned error can't find the container with id 98bf8a6568fde33956b1706da31638dde3818be781782a082ce3027f9f7e4079
	Sep 08 10:54:27 functional-461050 kubelet[5683]: E0908 10:54:27.213882    5683 manager.go:1116] Failed to create existing container: /kubepods/burstable/podb3b2c4a9245ed10cb68fb667e38cfc5f/crio-a03ded27dfaaf8e3b092152d17d30a4587016885c2d13004588dbe7034289667: Error finding container a03ded27dfaaf8e3b092152d17d30a4587016885c2d13004588dbe7034289667: Status 404 returned error can't find the container with id a03ded27dfaaf8e3b092152d17d30a4587016885c2d13004588dbe7034289667
	Sep 08 10:54:27 functional-461050 kubelet[5683]: E0908 10:54:27.214162    5683 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod85c4f9e2b7583adb7ba45dc12ba2d33e/crio-26fa07b45404b80680271d8cbe209c2cc89fbebc46cfc5b70d458b5d0d374a5c: Error finding container 26fa07b45404b80680271d8cbe209c2cc89fbebc46cfc5b70d458b5d0d374a5c: Status 404 returned error can't find the container with id 26fa07b45404b80680271d8cbe209c2cc89fbebc46cfc5b70d458b5d0d374a5c
	Sep 08 10:54:27 functional-461050 kubelet[5683]: E0908 10:54:27.474277    5683 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757328867473799174  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:252821}  inodes_used:{value:120}}"
	Sep 08 10:54:27 functional-461050 kubelet[5683]: E0908 10:54:27.474305    5683 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757328867473799174  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:252821}  inodes_used:{value:120}}"
	Sep 08 10:54:29 functional-461050 kubelet[5683]: E0908 10:54:29.142795    5683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="4d3dbe5d-2821-433e-b310-606725bae985"
	
	
	==> kubernetes-dashboard [a4b7a138b55d85b7581479c138021487114aa6126446a029bb63a39f4e6545f2] <==
	2025/09/08 10:44:22 Using namespace: kubernetes-dashboard
	2025/09/08 10:44:22 Using in-cluster config to connect to apiserver
	2025/09/08 10:44:22 Using secret token for csrf signing
	2025/09/08 10:44:22 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/09/08 10:44:22 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/09/08 10:44:22 Successful initial request to the apiserver, version: v1.34.0
	2025/09/08 10:44:22 Generating JWE encryption key
	2025/09/08 10:44:22 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/09/08 10:44:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/09/08 10:44:22 Initializing JWE encryption key from synchronized object
	2025/09/08 10:44:22 Creating in-cluster Sidecar client
	2025/09/08 10:44:22 Successful request to sidecar
	2025/09/08 10:44:22 Serving insecurely on HTTP port: 9090
	2025/09/08 10:44:22 Starting overwatch
	
	
	==> storage-provisioner [19262bb7897fb688d354ffbdf7d90a28a8c08859fdd19204596e20ffd279cbb5] <==
	W0908 10:54:05.035755       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:54:07.040014       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:54:07.045168       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:54:09.048890       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:54:09.053732       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:54:11.057556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:54:11.067754       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:54:13.073711       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:54:13.083931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:54:15.087117       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:54:15.092845       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:54:17.096936       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:54:17.105361       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:54:19.108710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:54:19.114590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:54:21.117932       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:54:21.122817       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:54:23.125682       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:54:23.130562       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:54:25.133489       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:54:25.141026       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:54:27.146148       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:54:27.151349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:54:29.157307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:54:29.164651       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [4acbfa7eaf68db26f7765249fd11606155157780e28c33889fd76647bf042dec] <==
	I0908 10:42:48.380312       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0908 10:42:48.380361       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0908 10:42:48.391572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:42:51.846747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:42:56.109649       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:42:59.708473       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:43:02.763191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:43:05.785336       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:43:05.790953       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0908 10:43:05.791069       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0908 10:43:05.792034       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"91dc6c72-6e62-464a-b160-6bf12ed3eb48", APIVersion:"v1", ResourceVersion:"539", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-461050_b72e6edd-2e6b-4aaa-85e6-7d7bacf275aa became leader
	I0908 10:43:05.792543       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-461050_b72e6edd-2e6b-4aaa-85e6-7d7bacf275aa!
	W0908 10:43:05.793879       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:43:05.802445       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0908 10:43:05.893185       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-461050_b72e6edd-2e6b-4aaa-85e6-7d7bacf275aa!
	W0908 10:43:07.805670       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:43:07.811422       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:43:09.814845       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:43:09.818865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:43:11.822319       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:43:11.831207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:43:13.834021       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:43:13.840989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:43:15.844885       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:43:15.850486       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-461050 -n functional-461050
helpers_test.go:269: (dbg) Run:  kubectl --context functional-461050 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount mysql-5bb876957f-gskqp sp-pod
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-461050 describe pod busybox-mount mysql-5bb876957f-gskqp sp-pod
helpers_test.go:290: (dbg) kubectl --context functional-461050 describe pod busybox-mount mysql-5bb876957f-gskqp sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-461050/192.168.39.94
	Start Time:       Mon, 08 Sep 2025 10:43:58 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://40061b06348f6c9a47b4a10d250373f937c60b05710d0998dcc85af69567010a
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 08 Sep 2025 10:44:07 +0000
	      Finished:     Mon, 08 Sep 2025 10:44:07 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zg898 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-zg898:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-461050
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 6.222s (8.398s including waiting). Image size: 4631262 bytes.
	  Normal  Created    10m   kubelet            Created container: mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             mysql-5bb876957f-gskqp
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-461050/192.168.39.94
	Start Time:       Mon, 08 Sep 2025 10:44:27 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.13
	IPs:
	  IP:           10.244.0.13
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-glh28 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-glh28:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-5bb876957f-gskqp to functional-461050
	  Warning  Failed     7m8s (x2 over 8m15s)   kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m45s (x5 over 9m58s)  kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     2m39s (x3 over 9m22s)  kubelet            Failed to pull image "docker.io/mysql:5.7": fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m39s (x5 over 9m22s)  kubelet            Error: ErrImagePull
	  Warning  Failed     84s (x16 over 9m22s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    17s (x21 over 9m22s)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-461050/192.168.39.94
	Start Time:       Mon, 08 Sep 2025 10:44:37 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.14
	IPs:
	  IP:  10.244.0.14
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cjrnx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-cjrnx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  9m52s                 default-scheduler  Successfully assigned default/sp-pod to functional-461050
	  Warning  Failed     7m42s                 kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:f15190cd0aed34df2541e6a569d349858dd83fe2a519d7c0ec023133b6d3c4f7 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m2s (x5 over 9m52s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     2m7s (x4 over 8m50s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m7s (x5 over 8m50s)  kubelet            Error: ErrImagePull
	  Warning  Failed     59s (x16 over 8m49s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    1s (x20 over 8m49s)   kubelet            Back-off pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/MySQL FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/MySQL (602.83s)
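The describe output and kubelet messages above all point at the same root cause: unauthenticated pulls of docker.io/mysql:5.7 and docker.io/nginx hitting Docker Hub's toomanyrequests rate limit, so both pods stay in ImagePullBackOff for the full 10m wait. A minimal sketch of one way to get a run like this past the limit, by pulling through an authenticated host daemon and side-loading the image into the profile (profile and pod names are taken from this run; the Docker Hub credentials are placeholders, and the pod is assumed to use the default IfNotPresent pull policy for a tagged image):

# Authenticate the host Docker daemon so pulls count against an account quota (placeholder user).
docker login -u <dockerhub-user>
# Pull on the host, then side-load the image so the kubelet never has to reach Docker Hub itself.
docker pull docker.io/mysql:5.7
minikube -p functional-461050 image load docker.io/mysql:5.7
# Watch the pending pod leave ImagePullBackOff once the image is present in the node's runtime.
kubectl --context functional-461050 get pod mysql-5bb876957f-gskqp -w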

                                                
                                    
TestPreload (149.54s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-493455 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-493455 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (1m7.519946207s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-493455 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-493455 image pull gcr.io/k8s-minikube/busybox: (6.531045254s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-493455
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-493455: (7.300669239s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-493455 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0908 11:38:30.883234  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:38:56.713455  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/functional-461050/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-493455 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m5.074065414s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-493455 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

                                                
                                                
-- /stdout --
panic.go:636: *** TestPreload FAILED at 2025-09-08 11:39:34.491006767 +0000 UTC m=+4226.453409960
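The image list above contains only the preloaded v1.32.0 image set, i.e. the gcr.io/k8s-minikube/busybox image pulled before the stop did not survive the restart. The failing sequence can be reproduced by hand with the same commands the test drives (a sketch using the profile name and flags from this run, assuming the profile was already created with the --preload=false start shown above):

out/minikube-linux-amd64 -p test-preload-493455 image pull gcr.io/k8s-minikube/busybox
out/minikube-linux-amd64 stop -p test-preload-493455
out/minikube-linux-amd64 start -p test-preload-493455 --memory=3072 --wait=true --driver=kvm2 --container-runtime=crio
# An empty result here reproduces the failure: the manually pulled image is gone after the restart.
out/minikube-linux-amd64 -p test-preload-493455 image list | grep busybox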
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-493455 -n test-preload-493455
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-493455 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-493455 logs -n 25: (1.094913449s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-064020 ssh -n multinode-064020-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-064020     │ jenkins │ v1.36.0 │ 08 Sep 25 11:24 UTC │ 08 Sep 25 11:24 UTC │
	│ ssh     │ multinode-064020 ssh -n multinode-064020 sudo cat /home/docker/cp-test_multinode-064020-m03_multinode-064020.txt                                          │ multinode-064020     │ jenkins │ v1.36.0 │ 08 Sep 25 11:24 UTC │ 08 Sep 25 11:24 UTC │
	│ cp      │ multinode-064020 cp multinode-064020-m03:/home/docker/cp-test.txt multinode-064020-m02:/home/docker/cp-test_multinode-064020-m03_multinode-064020-m02.txt │ multinode-064020     │ jenkins │ v1.36.0 │ 08 Sep 25 11:24 UTC │ 08 Sep 25 11:24 UTC │
	│ ssh     │ multinode-064020 ssh -n multinode-064020-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-064020     │ jenkins │ v1.36.0 │ 08 Sep 25 11:24 UTC │ 08 Sep 25 11:24 UTC │
	│ ssh     │ multinode-064020 ssh -n multinode-064020-m02 sudo cat /home/docker/cp-test_multinode-064020-m03_multinode-064020-m02.txt                                  │ multinode-064020     │ jenkins │ v1.36.0 │ 08 Sep 25 11:24 UTC │ 08 Sep 25 11:25 UTC │
	│ node    │ multinode-064020 node stop m03                                                                                                                            │ multinode-064020     │ jenkins │ v1.36.0 │ 08 Sep 25 11:25 UTC │ 08 Sep 25 11:25 UTC │
	│ node    │ multinode-064020 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-064020     │ jenkins │ v1.36.0 │ 08 Sep 25 11:25 UTC │ 08 Sep 25 11:25 UTC │
	│ node    │ list -p multinode-064020                                                                                                                                  │ multinode-064020     │ jenkins │ v1.36.0 │ 08 Sep 25 11:25 UTC │                     │
	│ stop    │ -p multinode-064020                                                                                                                                       │ multinode-064020     │ jenkins │ v1.36.0 │ 08 Sep 25 11:25 UTC │ 08 Sep 25 11:28 UTC │
	│ start   │ -p multinode-064020 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-064020     │ jenkins │ v1.36.0 │ 08 Sep 25 11:28 UTC │ 08 Sep 25 11:31 UTC │
	│ node    │ list -p multinode-064020                                                                                                                                  │ multinode-064020     │ jenkins │ v1.36.0 │ 08 Sep 25 11:31 UTC │                     │
	│ node    │ multinode-064020 node delete m03                                                                                                                          │ multinode-064020     │ jenkins │ v1.36.0 │ 08 Sep 25 11:31 UTC │ 08 Sep 25 11:31 UTC │
	│ stop    │ multinode-064020 stop                                                                                                                                     │ multinode-064020     │ jenkins │ v1.36.0 │ 08 Sep 25 11:31 UTC │ 08 Sep 25 11:34 UTC │
	│ start   │ -p multinode-064020 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-064020     │ jenkins │ v1.36.0 │ 08 Sep 25 11:34 UTC │ 08 Sep 25 11:36 UTC │
	│ node    │ list -p multinode-064020                                                                                                                                  │ multinode-064020     │ jenkins │ v1.36.0 │ 08 Sep 25 11:36 UTC │                     │
	│ start   │ -p multinode-064020-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-064020-m02 │ jenkins │ v1.36.0 │ 08 Sep 25 11:36 UTC │                     │
	│ start   │ -p multinode-064020-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-064020-m03 │ jenkins │ v1.36.0 │ 08 Sep 25 11:36 UTC │ 08 Sep 25 11:37 UTC │
	│ node    │ add -p multinode-064020                                                                                                                                   │ multinode-064020     │ jenkins │ v1.36.0 │ 08 Sep 25 11:37 UTC │                     │
	│ delete  │ -p multinode-064020-m03                                                                                                                                   │ multinode-064020-m03 │ jenkins │ v1.36.0 │ 08 Sep 25 11:37 UTC │ 08 Sep 25 11:37 UTC │
	│ delete  │ -p multinode-064020                                                                                                                                       │ multinode-064020     │ jenkins │ v1.36.0 │ 08 Sep 25 11:37 UTC │ 08 Sep 25 11:37 UTC │
	│ start   │ -p test-preload-493455 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0   │ test-preload-493455  │ jenkins │ v1.36.0 │ 08 Sep 25 11:37 UTC │ 08 Sep 25 11:38 UTC │
	│ image   │ test-preload-493455 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-493455  │ jenkins │ v1.36.0 │ 08 Sep 25 11:38 UTC │ 08 Sep 25 11:38 UTC │
	│ stop    │ -p test-preload-493455                                                                                                                                    │ test-preload-493455  │ jenkins │ v1.36.0 │ 08 Sep 25 11:38 UTC │ 08 Sep 25 11:38 UTC │
	│ start   │ -p test-preload-493455 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                           │ test-preload-493455  │ jenkins │ v1.36.0 │ 08 Sep 25 11:38 UTC │ 08 Sep 25 11:39 UTC │
	│ image   │ test-preload-493455 image list                                                                                                                            │ test-preload-493455  │ jenkins │ v1.36.0 │ 08 Sep 25 11:39 UTC │ 08 Sep 25 11:39 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 11:38:29
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 11:38:29.239839  786575 out.go:360] Setting OutFile to fd 1 ...
	I0908 11:38:29.239958  786575 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:38:29.239967  786575 out.go:374] Setting ErrFile to fd 2...
	I0908 11:38:29.239971  786575 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:38:29.240158  786575 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21503-748170/.minikube/bin
	I0908 11:38:29.240701  786575 out.go:368] Setting JSON to false
	I0908 11:38:29.241582  786575 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":73225,"bootTime":1757258284,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 11:38:29.241704  786575 start.go:140] virtualization: kvm guest
	I0908 11:38:29.243813  786575 out.go:179] * [test-preload-493455] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 11:38:29.245105  786575 out.go:179]   - MINIKUBE_LOCATION=21503
	I0908 11:38:29.245134  786575 notify.go:220] Checking for updates...
	I0908 11:38:29.247328  786575 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 11:38:29.248806  786575 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21503-748170/kubeconfig
	I0908 11:38:29.250029  786575 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21503-748170/.minikube
	I0908 11:38:29.251264  786575 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 11:38:29.252505  786575 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 11:38:29.254004  786575 config.go:182] Loaded profile config "test-preload-493455": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0908 11:38:29.254364  786575 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:38:29.254432  786575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:38:29.269431  786575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43649
	I0908 11:38:29.269847  786575 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:38:29.270325  786575 main.go:141] libmachine: Using API Version  1
	I0908 11:38:29.270349  786575 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:38:29.270719  786575 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:38:29.270905  786575 main.go:141] libmachine: (test-preload-493455) Calling .DriverName
	I0908 11:38:29.272590  786575 out.go:179] * Kubernetes 1.34.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.0
	I0908 11:38:29.273763  786575 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 11:38:29.274060  786575 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:38:29.274094  786575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:38:29.289023  786575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39779
	I0908 11:38:29.289454  786575 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:38:29.289817  786575 main.go:141] libmachine: Using API Version  1
	I0908 11:38:29.289836  786575 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:38:29.290163  786575 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:38:29.290342  786575 main.go:141] libmachine: (test-preload-493455) Calling .DriverName
	I0908 11:38:29.325130  786575 out.go:179] * Using the kvm2 driver based on existing profile
	I0908 11:38:29.326283  786575 start.go:304] selected driver: kvm2
	I0908 11:38:29.326296  786575 start.go:918] validating driver "kvm2" against &{Name:test-preload-493455 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-493455 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.62 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 11:38:29.326411  786575 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 11:38:29.327153  786575 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 11:38:29.327236  786575 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21503-748170/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0908 11:38:29.342246  786575 install.go:137] /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0908 11:38:29.342632  786575 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 11:38:29.342664  786575 cni.go:84] Creating CNI manager for ""
	I0908 11:38:29.342708  786575 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0908 11:38:29.342790  786575 start.go:348] cluster config:
	{Name:test-preload-493455 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-493455 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.62 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 11:38:29.342907  786575 iso.go:125] acquiring lock: {Name:mk013a3bcd14eba8870ec8e08630600588ab11c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 11:38:29.344551  786575 out.go:179] * Starting "test-preload-493455" primary control-plane node in "test-preload-493455" cluster
	I0908 11:38:29.345702  786575 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0908 11:38:29.958072  786575 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0908 11:38:29.958133  786575 cache.go:58] Caching tarball of preloaded images
	I0908 11:38:29.958360  786575 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0908 11:38:29.960346  786575 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I0908 11:38:29.961567  786575 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 ...
	I0908 11:38:30.120039  786575 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21503-748170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0908 11:38:43.989547  786575 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 ...
	I0908 11:38:43.989656  786575 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21503-748170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 ...
	I0908 11:38:44.738783  786575 cache.go:61] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I0908 11:38:44.738916  786575 profile.go:143] Saving config to /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/test-preload-493455/config.json ...
	I0908 11:38:44.739159  786575 start.go:360] acquireMachinesLock for test-preload-493455: {Name:mkc620e3900da426b9c156141af1783a234a8bd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0908 11:38:44.739231  786575 start.go:364] duration metric: took 47.524µs to acquireMachinesLock for "test-preload-493455"
	I0908 11:38:44.739248  786575 start.go:96] Skipping create...Using existing machine configuration
	I0908 11:38:44.739253  786575 fix.go:54] fixHost starting: 
	I0908 11:38:44.739575  786575 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:38:44.739613  786575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:38:44.754439  786575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42679
	I0908 11:38:44.754882  786575 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:38:44.755300  786575 main.go:141] libmachine: Using API Version  1
	I0908 11:38:44.755324  786575 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:38:44.755639  786575 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:38:44.755839  786575 main.go:141] libmachine: (test-preload-493455) Calling .DriverName
	I0908 11:38:44.755951  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetState
	I0908 11:38:44.757600  786575 fix.go:112] recreateIfNeeded on test-preload-493455: state=Stopped err=<nil>
	I0908 11:38:44.757634  786575 main.go:141] libmachine: (test-preload-493455) Calling .DriverName
	W0908 11:38:44.757780  786575 fix.go:138] unexpected machine state, will restart: <nil>
	I0908 11:38:44.760289  786575 out.go:252] * Restarting existing kvm2 VM for "test-preload-493455" ...
	I0908 11:38:44.760318  786575 main.go:141] libmachine: (test-preload-493455) Calling .Start
	I0908 11:38:44.760511  786575 main.go:141] libmachine: (test-preload-493455) starting domain...
	I0908 11:38:44.760531  786575 main.go:141] libmachine: (test-preload-493455) ensuring networks are active...
	I0908 11:38:44.761305  786575 main.go:141] libmachine: (test-preload-493455) Ensuring network default is active
	I0908 11:38:44.761671  786575 main.go:141] libmachine: (test-preload-493455) Ensuring network mk-test-preload-493455 is active
	I0908 11:38:44.762070  786575 main.go:141] libmachine: (test-preload-493455) getting domain XML...
	I0908 11:38:44.762807  786575 main.go:141] libmachine: (test-preload-493455) creating domain...
	I0908 11:38:45.977399  786575 main.go:141] libmachine: (test-preload-493455) waiting for IP...
	I0908 11:38:45.978556  786575 main.go:141] libmachine: (test-preload-493455) DBG | domain test-preload-493455 has defined MAC address 52:54:00:eb:de:d9 in network mk-test-preload-493455
	I0908 11:38:45.978999  786575 main.go:141] libmachine: (test-preload-493455) DBG | unable to find current IP address of domain test-preload-493455 in network mk-test-preload-493455
	I0908 11:38:45.979122  786575 main.go:141] libmachine: (test-preload-493455) DBG | I0908 11:38:45.979004  786658 retry.go:31] will retry after 265.871897ms: waiting for domain to come up
	I0908 11:38:46.246510  786575 main.go:141] libmachine: (test-preload-493455) DBG | domain test-preload-493455 has defined MAC address 52:54:00:eb:de:d9 in network mk-test-preload-493455
	I0908 11:38:46.246940  786575 main.go:141] libmachine: (test-preload-493455) DBG | unable to find current IP address of domain test-preload-493455 in network mk-test-preload-493455
	I0908 11:38:46.246966  786575 main.go:141] libmachine: (test-preload-493455) DBG | I0908 11:38:46.246911  786658 retry.go:31] will retry after 274.458718ms: waiting for domain to come up
	I0908 11:38:46.523828  786575 main.go:141] libmachine: (test-preload-493455) DBG | domain test-preload-493455 has defined MAC address 52:54:00:eb:de:d9 in network mk-test-preload-493455
	I0908 11:38:46.524203  786575 main.go:141] libmachine: (test-preload-493455) DBG | unable to find current IP address of domain test-preload-493455 in network mk-test-preload-493455
	I0908 11:38:46.524230  786575 main.go:141] libmachine: (test-preload-493455) DBG | I0908 11:38:46.524165  786658 retry.go:31] will retry after 438.073174ms: waiting for domain to come up
	I0908 11:38:46.963689  786575 main.go:141] libmachine: (test-preload-493455) DBG | domain test-preload-493455 has defined MAC address 52:54:00:eb:de:d9 in network mk-test-preload-493455
	I0908 11:38:46.964152  786575 main.go:141] libmachine: (test-preload-493455) DBG | unable to find current IP address of domain test-preload-493455 in network mk-test-preload-493455
	I0908 11:38:46.964196  786575 main.go:141] libmachine: (test-preload-493455) DBG | I0908 11:38:46.964138  786658 retry.go:31] will retry after 514.68706ms: waiting for domain to come up
	I0908 11:38:47.480971  786575 main.go:141] libmachine: (test-preload-493455) DBG | domain test-preload-493455 has defined MAC address 52:54:00:eb:de:d9 in network mk-test-preload-493455
	I0908 11:38:47.481496  786575 main.go:141] libmachine: (test-preload-493455) DBG | unable to find current IP address of domain test-preload-493455 in network mk-test-preload-493455
	I0908 11:38:47.481529  786575 main.go:141] libmachine: (test-preload-493455) DBG | I0908 11:38:47.481451  786658 retry.go:31] will retry after 713.204126ms: waiting for domain to come up
	I0908 11:38:48.196485  786575 main.go:141] libmachine: (test-preload-493455) DBG | domain test-preload-493455 has defined MAC address 52:54:00:eb:de:d9 in network mk-test-preload-493455
	I0908 11:38:48.196884  786575 main.go:141] libmachine: (test-preload-493455) DBG | unable to find current IP address of domain test-preload-493455 in network mk-test-preload-493455
	I0908 11:38:48.196938  786575 main.go:141] libmachine: (test-preload-493455) DBG | I0908 11:38:48.196858  786658 retry.go:31] will retry after 586.851779ms: waiting for domain to come up
	I0908 11:38:48.785919  786575 main.go:141] libmachine: (test-preload-493455) DBG | domain test-preload-493455 has defined MAC address 52:54:00:eb:de:d9 in network mk-test-preload-493455
	I0908 11:38:48.786444  786575 main.go:141] libmachine: (test-preload-493455) DBG | unable to find current IP address of domain test-preload-493455 in network mk-test-preload-493455
	I0908 11:38:48.786539  786575 main.go:141] libmachine: (test-preload-493455) DBG | I0908 11:38:48.786426  786658 retry.go:31] will retry after 894.317046ms: waiting for domain to come up
	I0908 11:38:49.682332  786575 main.go:141] libmachine: (test-preload-493455) DBG | domain test-preload-493455 has defined MAC address 52:54:00:eb:de:d9 in network mk-test-preload-493455
	I0908 11:38:49.682690  786575 main.go:141] libmachine: (test-preload-493455) DBG | unable to find current IP address of domain test-preload-493455 in network mk-test-preload-493455
	I0908 11:38:49.682713  786575 main.go:141] libmachine: (test-preload-493455) DBG | I0908 11:38:49.682666  786658 retry.go:31] will retry after 1.274431233s: waiting for domain to come up
	I0908 11:38:50.958568  786575 main.go:141] libmachine: (test-preload-493455) DBG | domain test-preload-493455 has defined MAC address 52:54:00:eb:de:d9 in network mk-test-preload-493455
	I0908 11:38:50.959116  786575 main.go:141] libmachine: (test-preload-493455) DBG | unable to find current IP address of domain test-preload-493455 in network mk-test-preload-493455
	I0908 11:38:50.959156  786575 main.go:141] libmachine: (test-preload-493455) DBG | I0908 11:38:50.959088  786658 retry.go:31] will retry after 1.437017185s: waiting for domain to come up
	I0908 11:38:52.397863  786575 main.go:141] libmachine: (test-preload-493455) DBG | domain test-preload-493455 has defined MAC address 52:54:00:eb:de:d9 in network mk-test-preload-493455
	I0908 11:38:52.398294  786575 main.go:141] libmachine: (test-preload-493455) DBG | unable to find current IP address of domain test-preload-493455 in network mk-test-preload-493455
	I0908 11:38:52.398326  786575 main.go:141] libmachine: (test-preload-493455) DBG | I0908 11:38:52.398251  786658 retry.go:31] will retry after 2.120824459s: waiting for domain to come up
	I0908 11:38:54.521628  786575 main.go:141] libmachine: (test-preload-493455) DBG | domain test-preload-493455 has defined MAC address 52:54:00:eb:de:d9 in network mk-test-preload-493455
	I0908 11:38:54.522074  786575 main.go:141] libmachine: (test-preload-493455) DBG | unable to find current IP address of domain test-preload-493455 in network mk-test-preload-493455
	I0908 11:38:54.522106  786575 main.go:141] libmachine: (test-preload-493455) DBG | I0908 11:38:54.522024  786658 retry.go:31] will retry after 2.324488543s: waiting for domain to come up
	I0908 11:38:56.847990  786575 main.go:141] libmachine: (test-preload-493455) DBG | domain test-preload-493455 has defined MAC address 52:54:00:eb:de:d9 in network mk-test-preload-493455
	I0908 11:38:56.848388  786575 main.go:141] libmachine: (test-preload-493455) DBG | unable to find current IP address of domain test-preload-493455 in network mk-test-preload-493455
	I0908 11:38:56.848416  786575 main.go:141] libmachine: (test-preload-493455) DBG | I0908 11:38:56.848348  786658 retry.go:31] will retry after 2.339077156s: waiting for domain to come up
	I0908 11:38:59.188545  786575 main.go:141] libmachine: (test-preload-493455) DBG | domain test-preload-493455 has defined MAC address 52:54:00:eb:de:d9 in network mk-test-preload-493455
	I0908 11:38:59.189064  786575 main.go:141] libmachine: (test-preload-493455) DBG | unable to find current IP address of domain test-preload-493455 in network mk-test-preload-493455
	I0908 11:38:59.189107  786575 main.go:141] libmachine: (test-preload-493455) DBG | I0908 11:38:59.189028  786658 retry.go:31] will retry after 4.451828026s: waiting for domain to come up
	I0908 11:39:03.645492  786575 main.go:141] libmachine: (test-preload-493455) DBG | domain test-preload-493455 has defined MAC address 52:54:00:eb:de:d9 in network mk-test-preload-493455
	I0908 11:39:03.645922  786575 main.go:141] libmachine: (test-preload-493455) found domain IP: 192.168.39.62
	I0908 11:39:03.645950  786575 main.go:141] libmachine: (test-preload-493455) reserving static IP address...
	I0908 11:39:03.645968  786575 main.go:141] libmachine: (test-preload-493455) DBG | domain test-preload-493455 has current primary IP address 192.168.39.62 and MAC address 52:54:00:eb:de:d9 in network mk-test-preload-493455
	I0908 11:39:03.646423  786575 main.go:141] libmachine: (test-preload-493455) DBG | found host DHCP lease matching {name: "test-preload-493455", mac: "52:54:00:eb:de:d9", ip: "192.168.39.62"} in network mk-test-preload-493455: {Iface:virbr1 ExpiryTime:2025-09-08 12:38:56 +0000 UTC Type:0 Mac:52:54:00:eb:de:d9 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:test-preload-493455 Clientid:01:52:54:00:eb:de:d9}
	I0908 11:39:03.646447  786575 main.go:141] libmachine: (test-preload-493455) reserved static IP address 192.168.39.62 for domain test-preload-493455
	I0908 11:39:03.646460  786575 main.go:141] libmachine: (test-preload-493455) DBG | skip adding static IP to network mk-test-preload-493455 - found existing host DHCP lease matching {name: "test-preload-493455", mac: "52:54:00:eb:de:d9", ip: "192.168.39.62"}
	I0908 11:39:03.646471  786575 main.go:141] libmachine: (test-preload-493455) DBG | Getting to WaitForSSH function...
	I0908 11:39:03.646512  786575 main.go:141] libmachine: (test-preload-493455) waiting for SSH...
	I0908 11:39:03.649146  786575 main.go:141] libmachine: (test-preload-493455) DBG | domain test-preload-493455 has defined MAC address 52:54:00:eb:de:d9 in network mk-test-preload-493455
	I0908 11:39:03.649496  786575 main.go:141] libmachine: (test-preload-493455) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:de:d9", ip: ""} in network mk-test-preload-493455: {Iface:virbr1 ExpiryTime:2025-09-08 12:38:56 +0000 UTC Type:0 Mac:52:54:00:eb:de:d9 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:test-preload-493455 Clientid:01:52:54:00:eb:de:d9}
	I0908 11:39:03.649518  786575 main.go:141] libmachine: (test-preload-493455) DBG | domain test-preload-493455 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:de:d9 in network mk-test-preload-493455
	I0908 11:39:03.649689  786575 main.go:141] libmachine: (test-preload-493455) DBG | Using SSH client type: external
	I0908 11:39:03.649730  786575 main.go:141] libmachine: (test-preload-493455) DBG | Using SSH private key: /home/jenkins/minikube-integration/21503-748170/.minikube/machines/test-preload-493455/id_rsa (-rw-------)
	I0908 11:39:03.649770  786575 main.go:141] libmachine: (test-preload-493455) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.62 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21503-748170/.minikube/machines/test-preload-493455/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0908 11:39:03.649788  786575 main.go:141] libmachine: (test-preload-493455) DBG | About to run SSH command:
	I0908 11:39:03.649803  786575 main.go:141] libmachine: (test-preload-493455) DBG | exit 0
	I0908 11:39:03.773836  786575 main.go:141] libmachine: (test-preload-493455) DBG | SSH cmd err, output: <nil>: 
	I0908 11:39:03.774200  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetConfigRaw
	I0908 11:39:03.774834  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetIP
	I0908 11:39:03.777432  786575 main.go:141] libmachine: (test-preload-493455) DBG | domain test-preload-493455 has defined MAC address 52:54:00:eb:de:d9 in network mk-test-preload-493455
	I0908 11:39:03.777727  786575 main.go:141] libmachine: (test-preload-493455) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:de:d9", ip: ""} in network mk-test-preload-493455: {Iface:virbr1 ExpiryTime:2025-09-08 12:38:56 +0000 UTC Type:0 Mac:52:54:00:eb:de:d9 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:test-preload-493455 Clientid:01:52:54:00:eb:de:d9}
	I0908 11:39:03.777751  786575 main.go:141] libmachine: (test-preload-493455) DBG | domain test-preload-493455 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:de:d9 in network mk-test-preload-493455
	I0908 11:39:03.777966  786575 profile.go:143] Saving config to /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/test-preload-493455/config.json ...
	I0908 11:39:03.778162  786575 machine.go:93] provisionDockerMachine start ...
	I0908 11:39:03.778182  786575 main.go:141] libmachine: (test-preload-493455) Calling .DriverName
	I0908 11:39:03.778414  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetSSHHostname
	I0908 11:39:03.780522  786575 main.go:141] libmachine: (test-preload-493455) DBG | domain test-preload-493455 has defined MAC address 52:54:00:eb:de:d9 in network mk-test-preload-493455
	I0908 11:39:03.780919  786575 main.go:141] libmachine: (test-preload-493455) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:de:d9", ip: ""} in network mk-test-preload-493455: {Iface:virbr1 ExpiryTime:2025-09-08 12:38:56 +0000 UTC Type:0 Mac:52:54:00:eb:de:d9 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:test-preload-493455 Clientid:01:52:54:00:eb:de:d9}
	I0908 11:39:03.780952  786575 main.go:141] libmachine: (test-preload-493455) DBG | domain test-preload-493455 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:de:d9 in network mk-test-preload-493455
	I0908 11:39:03.781072  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetSSHPort
	I0908 11:39:03.781277  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetSSHKeyPath
	I0908 11:39:03.781460  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetSSHKeyPath
	I0908 11:39:03.781607  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetSSHUsername
	I0908 11:39:03.781759  786575 main.go:141] libmachine: Using SSH client type: native
	I0908 11:39:03.782068  786575 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I0908 11:39:03.782082  786575 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 11:39:03.890174  786575 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0908 11:39:03.890207  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetMachineName
	I0908 11:39:03.890496  786575 buildroot.go:166] provisioning hostname "test-preload-493455"
	I0908 11:39:03.890530  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetMachineName
	I0908 11:39:03.890725  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetSSHHostname
	I0908 11:39:03.893888  786575 main.go:141] libmachine: (test-preload-493455) DBG | domain test-preload-493455 has defined MAC address 52:54:00:eb:de:d9 in network mk-test-preload-493455
	I0908 11:39:03.894296  786575 main.go:141] libmachine: (test-preload-493455) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:de:d9", ip: ""} in network mk-test-preload-493455: {Iface:virbr1 ExpiryTime:2025-09-08 12:38:56 +0000 UTC Type:0 Mac:52:54:00:eb:de:d9 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:test-preload-493455 Clientid:01:52:54:00:eb:de:d9}
	I0908 11:39:03.894324  786575 main.go:141] libmachine: (test-preload-493455) DBG | domain test-preload-493455 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:de:d9 in network mk-test-preload-493455
	I0908 11:39:03.894475  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetSSHPort
	I0908 11:39:03.894662  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetSSHKeyPath
	I0908 11:39:03.894806  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetSSHKeyPath
	I0908 11:39:03.894914  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetSSHUsername
	I0908 11:39:03.895087  786575 main.go:141] libmachine: Using SSH client type: native
	I0908 11:39:03.895296  786575 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I0908 11:39:03.895310  786575 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-493455 && echo "test-preload-493455" | sudo tee /etc/hostname
	I0908 11:39:04.023543  786575 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-493455
	
	I0908 11:39:04.023574  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetSSHHostname
	I0908 11:39:04.026537  786575 main.go:141] libmachine: (test-preload-493455) DBG | domain test-preload-493455 has defined MAC address 52:54:00:eb:de:d9 in network mk-test-preload-493455
	I0908 11:39:04.026859  786575 main.go:141] libmachine: (test-preload-493455) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:de:d9", ip: ""} in network mk-test-preload-493455: {Iface:virbr1 ExpiryTime:2025-09-08 12:38:56 +0000 UTC Type:0 Mac:52:54:00:eb:de:d9 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:test-preload-493455 Clientid:01:52:54:00:eb:de:d9}
	I0908 11:39:04.026883  786575 main.go:141] libmachine: (test-preload-493455) DBG | domain test-preload-493455 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:de:d9 in network mk-test-preload-493455
	I0908 11:39:04.027078  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetSSHPort
	I0908 11:39:04.027284  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetSSHKeyPath
	I0908 11:39:04.027464  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetSSHKeyPath
	I0908 11:39:04.027616  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetSSHUsername
	I0908 11:39:04.027773  786575 main.go:141] libmachine: Using SSH client type: native
	I0908 11:39:04.027975  786575 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I0908 11:39:04.027990  786575 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-493455' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-493455/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-493455' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 11:39:04.143938  786575 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 11:39:04.143979  786575 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21503-748170/.minikube CaCertPath:/home/jenkins/minikube-integration/21503-748170/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21503-748170/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21503-748170/.minikube}
	I0908 11:39:04.144019  786575 buildroot.go:174] setting up certificates
	I0908 11:39:04.144037  786575 provision.go:84] configureAuth start
	I0908 11:39:04.144050  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetMachineName
	I0908 11:39:04.144422  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetIP
	I0908 11:39:04.147192  786575 main.go:141] libmachine: (test-preload-493455) DBG | domain test-preload-493455 has defined MAC address 52:54:00:eb:de:d9 in network mk-test-preload-493455
	I0908 11:39:04.147478  786575 main.go:141] libmachine: (test-preload-493455) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:de:d9", ip: ""} in network mk-test-preload-493455: {Iface:virbr1 ExpiryTime:2025-09-08 12:38:56 +0000 UTC Type:0 Mac:52:54:00:eb:de:d9 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:test-preload-493455 Clientid:01:52:54:00:eb:de:d9}
	I0908 11:39:04.147533  786575 main.go:141] libmachine: (test-preload-493455) DBG | domain test-preload-493455 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:de:d9 in network mk-test-preload-493455
	I0908 11:39:04.147654  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetSSHHostname
	I0908 11:39:04.149691  786575 main.go:141] libmachine: (test-preload-493455) DBG | domain test-preload-493455 has defined MAC address 52:54:00:eb:de:d9 in network mk-test-preload-493455
	I0908 11:39:04.149977  786575 main.go:141] libmachine: (test-preload-493455) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:de:d9", ip: ""} in network mk-test-preload-493455: {Iface:virbr1 ExpiryTime:2025-09-08 12:38:56 +0000 UTC Type:0 Mac:52:54:00:eb:de:d9 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:test-preload-493455 Clientid:01:52:54:00:eb:de:d9}
	I0908 11:39:04.150014  786575 main.go:141] libmachine: (test-preload-493455) DBG | domain test-preload-493455 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:de:d9 in network mk-test-preload-493455
	I0908 11:39:04.150101  786575 provision.go:143] copyHostCerts
	I0908 11:39:04.150176  786575 exec_runner.go:144] found /home/jenkins/minikube-integration/21503-748170/.minikube/ca.pem, removing ...
	I0908 11:39:04.150195  786575 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21503-748170/.minikube/ca.pem
	I0908 11:39:04.150269  786575 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21503-748170/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21503-748170/.minikube/ca.pem (1078 bytes)
	I0908 11:39:04.150371  786575 exec_runner.go:144] found /home/jenkins/minikube-integration/21503-748170/.minikube/cert.pem, removing ...
	I0908 11:39:04.150383  786575 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21503-748170/.minikube/cert.pem
	I0908 11:39:04.150443  786575 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21503-748170/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21503-748170/.minikube/cert.pem (1123 bytes)
	I0908 11:39:04.150520  786575 exec_runner.go:144] found /home/jenkins/minikube-integration/21503-748170/.minikube/key.pem, removing ...
	I0908 11:39:04.150530  786575 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21503-748170/.minikube/key.pem
	I0908 11:39:04.150558  786575 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21503-748170/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21503-748170/.minikube/key.pem (1675 bytes)
	I0908 11:39:04.150624  786575 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21503-748170/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21503-748170/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21503-748170/.minikube/certs/ca-key.pem org=jenkins.test-preload-493455 san=[127.0.0.1 192.168.39.62 localhost minikube test-preload-493455]
	I0908 11:39:04.321377  786575 provision.go:177] copyRemoteCerts
	I0908 11:39:04.321447  786575 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 11:39:04.321474  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetSSHHostname
	I0908 11:39:04.324419  786575 main.go:141] libmachine: (test-preload-493455) DBG | domain test-preload-493455 has defined MAC address 52:54:00:eb:de:d9 in network mk-test-preload-493455
	I0908 11:39:04.324787  786575 main.go:141] libmachine: (test-preload-493455) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:de:d9", ip: ""} in network mk-test-preload-493455: {Iface:virbr1 ExpiryTime:2025-09-08 12:38:56 +0000 UTC Type:0 Mac:52:54:00:eb:de:d9 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:test-preload-493455 Clientid:01:52:54:00:eb:de:d9}
	I0908 11:39:04.324827  786575 main.go:141] libmachine: (test-preload-493455) DBG | domain test-preload-493455 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:de:d9 in network mk-test-preload-493455
	I0908 11:39:04.325009  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetSSHPort
	I0908 11:39:04.325341  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetSSHKeyPath
	I0908 11:39:04.325545  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetSSHUsername
	I0908 11:39:04.325691  786575 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/test-preload-493455/id_rsa Username:docker}
	I0908 11:39:04.409626  786575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0908 11:39:04.438603  786575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0908 11:39:04.466799  786575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0908 11:39:04.494923  786575 provision.go:87] duration metric: took 350.869795ms to configureAuth
	I0908 11:39:04.494956  786575 buildroot.go:189] setting minikube options for container-runtime
	I0908 11:39:04.495145  786575 config.go:182] Loaded profile config "test-preload-493455": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0908 11:39:04.495241  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetSSHHostname
	I0908 11:39:04.498176  786575 main.go:141] libmachine: (test-preload-493455) DBG | domain test-preload-493455 has defined MAC address 52:54:00:eb:de:d9 in network mk-test-preload-493455
	I0908 11:39:04.498519  786575 main.go:141] libmachine: (test-preload-493455) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:de:d9", ip: ""} in network mk-test-preload-493455: {Iface:virbr1 ExpiryTime:2025-09-08 12:38:56 +0000 UTC Type:0 Mac:52:54:00:eb:de:d9 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:test-preload-493455 Clientid:01:52:54:00:eb:de:d9}
	I0908 11:39:04.498548  786575 main.go:141] libmachine: (test-preload-493455) DBG | domain test-preload-493455 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:de:d9 in network mk-test-preload-493455
	I0908 11:39:04.498687  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetSSHPort
	I0908 11:39:04.498885  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetSSHKeyPath
	I0908 11:39:04.499053  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetSSHKeyPath
	I0908 11:39:04.499173  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetSSHUsername
	I0908 11:39:04.499329  786575 main.go:141] libmachine: Using SSH client type: native
	I0908 11:39:04.499525  786575 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I0908 11:39:04.499541  786575 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0908 11:39:04.740669  786575 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0908 11:39:04.740700  786575 machine.go:96] duration metric: took 962.524549ms to provisionDockerMachine
	I0908 11:39:04.740712  786575 start.go:293] postStartSetup for "test-preload-493455" (driver="kvm2")
	I0908 11:39:04.740722  786575 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 11:39:04.740740  786575 main.go:141] libmachine: (test-preload-493455) Calling .DriverName
	I0908 11:39:04.741140  786575 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 11:39:04.741173  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetSSHHostname
	I0908 11:39:04.743963  786575 main.go:141] libmachine: (test-preload-493455) DBG | domain test-preload-493455 has defined MAC address 52:54:00:eb:de:d9 in network mk-test-preload-493455
	I0908 11:39:04.744329  786575 main.go:141] libmachine: (test-preload-493455) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:de:d9", ip: ""} in network mk-test-preload-493455: {Iface:virbr1 ExpiryTime:2025-09-08 12:38:56 +0000 UTC Type:0 Mac:52:54:00:eb:de:d9 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:test-preload-493455 Clientid:01:52:54:00:eb:de:d9}
	I0908 11:39:04.744366  786575 main.go:141] libmachine: (test-preload-493455) DBG | domain test-preload-493455 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:de:d9 in network mk-test-preload-493455
	I0908 11:39:04.744485  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetSSHPort
	I0908 11:39:04.744678  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetSSHKeyPath
	I0908 11:39:04.744853  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetSSHUsername
	I0908 11:39:04.744979  786575 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/test-preload-493455/id_rsa Username:docker}
	I0908 11:39:04.830156  786575 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 11:39:04.834985  786575 info.go:137] Remote host: Buildroot 2025.02
	I0908 11:39:04.835011  786575 filesync.go:126] Scanning /home/jenkins/minikube-integration/21503-748170/.minikube/addons for local assets ...
	I0908 11:39:04.835109  786575 filesync.go:126] Scanning /home/jenkins/minikube-integration/21503-748170/.minikube/files for local assets ...
	I0908 11:39:04.835208  786575 filesync.go:149] local asset: /home/jenkins/minikube-integration/21503-748170/.minikube/files/etc/ssl/certs/7523322.pem -> 7523322.pem in /etc/ssl/certs
	I0908 11:39:04.835318  786575 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0908 11:39:04.847176  786575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/files/etc/ssl/certs/7523322.pem --> /etc/ssl/certs/7523322.pem (1708 bytes)
	I0908 11:39:04.876857  786575 start.go:296] duration metric: took 136.127679ms for postStartSetup
	I0908 11:39:04.876914  786575 fix.go:56] duration metric: took 20.137655045s for fixHost
	I0908 11:39:04.876935  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetSSHHostname
	I0908 11:39:04.879660  786575 main.go:141] libmachine: (test-preload-493455) DBG | domain test-preload-493455 has defined MAC address 52:54:00:eb:de:d9 in network mk-test-preload-493455
	I0908 11:39:04.880128  786575 main.go:141] libmachine: (test-preload-493455) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:de:d9", ip: ""} in network mk-test-preload-493455: {Iface:virbr1 ExpiryTime:2025-09-08 12:38:56 +0000 UTC Type:0 Mac:52:54:00:eb:de:d9 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:test-preload-493455 Clientid:01:52:54:00:eb:de:d9}
	I0908 11:39:04.880157  786575 main.go:141] libmachine: (test-preload-493455) DBG | domain test-preload-493455 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:de:d9 in network mk-test-preload-493455
	I0908 11:39:04.880356  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetSSHPort
	I0908 11:39:04.880571  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetSSHKeyPath
	I0908 11:39:04.880751  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetSSHKeyPath
	I0908 11:39:04.880872  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetSSHUsername
	I0908 11:39:04.881004  786575 main.go:141] libmachine: Using SSH client type: native
	I0908 11:39:04.881223  786575 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I0908 11:39:04.881252  786575 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0908 11:39:04.986915  786575 main.go:141] libmachine: SSH cmd err, output: <nil>: 1757331544.942756493
	
	I0908 11:39:04.986945  786575 fix.go:216] guest clock: 1757331544.942756493
	I0908 11:39:04.986954  786575 fix.go:229] Guest: 2025-09-08 11:39:04.942756493 +0000 UTC Remote: 2025-09-08 11:39:04.87691768 +0000 UTC m=+35.675098547 (delta=65.838813ms)
	I0908 11:39:04.987001  786575 fix.go:200] guest clock delta is within tolerance: 65.838813ms
	I0908 11:39:04.987008  786575 start.go:83] releasing machines lock for "test-preload-493455", held for 20.247767131s
	I0908 11:39:04.987038  786575 main.go:141] libmachine: (test-preload-493455) Calling .DriverName
	I0908 11:39:04.987366  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetIP
	I0908 11:39:04.990211  786575 main.go:141] libmachine: (test-preload-493455) DBG | domain test-preload-493455 has defined MAC address 52:54:00:eb:de:d9 in network mk-test-preload-493455
	I0908 11:39:04.990572  786575 main.go:141] libmachine: (test-preload-493455) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:de:d9", ip: ""} in network mk-test-preload-493455: {Iface:virbr1 ExpiryTime:2025-09-08 12:38:56 +0000 UTC Type:0 Mac:52:54:00:eb:de:d9 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:test-preload-493455 Clientid:01:52:54:00:eb:de:d9}
	I0908 11:39:04.990601  786575 main.go:141] libmachine: (test-preload-493455) DBG | domain test-preload-493455 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:de:d9 in network mk-test-preload-493455
	I0908 11:39:04.990720  786575 main.go:141] libmachine: (test-preload-493455) Calling .DriverName
	I0908 11:39:04.991174  786575 main.go:141] libmachine: (test-preload-493455) Calling .DriverName
	I0908 11:39:04.991356  786575 main.go:141] libmachine: (test-preload-493455) Calling .DriverName
	I0908 11:39:04.991477  786575 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 11:39:04.991536  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetSSHHostname
	I0908 11:39:04.991597  786575 ssh_runner.go:195] Run: cat /version.json
	I0908 11:39:04.991626  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetSSHHostname
	I0908 11:39:04.994164  786575 main.go:141] libmachine: (test-preload-493455) DBG | domain test-preload-493455 has defined MAC address 52:54:00:eb:de:d9 in network mk-test-preload-493455
	I0908 11:39:04.994396  786575 main.go:141] libmachine: (test-preload-493455) DBG | domain test-preload-493455 has defined MAC address 52:54:00:eb:de:d9 in network mk-test-preload-493455
	I0908 11:39:04.994582  786575 main.go:141] libmachine: (test-preload-493455) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:de:d9", ip: ""} in network mk-test-preload-493455: {Iface:virbr1 ExpiryTime:2025-09-08 12:38:56 +0000 UTC Type:0 Mac:52:54:00:eb:de:d9 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:test-preload-493455 Clientid:01:52:54:00:eb:de:d9}
	I0908 11:39:04.994618  786575 main.go:141] libmachine: (test-preload-493455) DBG | domain test-preload-493455 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:de:d9 in network mk-test-preload-493455
	I0908 11:39:04.994761  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetSSHPort
	I0908 11:39:04.994826  786575 main.go:141] libmachine: (test-preload-493455) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:de:d9", ip: ""} in network mk-test-preload-493455: {Iface:virbr1 ExpiryTime:2025-09-08 12:38:56 +0000 UTC Type:0 Mac:52:54:00:eb:de:d9 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:test-preload-493455 Clientid:01:52:54:00:eb:de:d9}
	I0908 11:39:04.994856  786575 main.go:141] libmachine: (test-preload-493455) DBG | domain test-preload-493455 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:de:d9 in network mk-test-preload-493455
	I0908 11:39:04.994924  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetSSHKeyPath
	I0908 11:39:04.994991  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetSSHPort
	I0908 11:39:04.995070  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetSSHUsername
	I0908 11:39:04.995134  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetSSHKeyPath
	I0908 11:39:04.995187  786575 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/test-preload-493455/id_rsa Username:docker}
	I0908 11:39:04.995248  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetSSHUsername
	I0908 11:39:04.995373  786575 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/test-preload-493455/id_rsa Username:docker}
	I0908 11:39:05.102326  786575 ssh_runner.go:195] Run: systemctl --version
	I0908 11:39:05.108888  786575 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0908 11:39:05.255366  786575 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0908 11:39:05.262324  786575 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0908 11:39:05.262401  786575 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 11:39:05.283005  786575 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0908 11:39:05.283043  786575 start.go:495] detecting cgroup driver to use...
	I0908 11:39:05.283120  786575 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 11:39:05.301961  786575 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 11:39:05.318325  786575 docker.go:218] disabling cri-docker service (if available) ...
	I0908 11:39:05.318410  786575 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0908 11:39:05.334139  786575 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0908 11:39:05.350158  786575 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0908 11:39:05.495410  786575 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0908 11:39:05.636777  786575 docker.go:234] disabling docker service ...
	I0908 11:39:05.636869  786575 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0908 11:39:05.653066  786575 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0908 11:39:05.667573  786575 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0908 11:39:05.883473  786575 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0908 11:39:06.016353  786575 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 11:39:06.032282  786575 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 11:39:06.054010  786575 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0908 11:39:06.054083  786575 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:39:06.068401  786575 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0908 11:39:06.068483  786575 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:39:06.082422  786575 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:39:06.094297  786575 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:39:06.105708  786575 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 11:39:06.117872  786575 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:39:06.129296  786575 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:39:06.148801  786575 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:39:06.160472  786575 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 11:39:06.170316  786575 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0908 11:39:06.170375  786575 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0908 11:39:06.190219  786575 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 11:39:06.201346  786575 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:39:06.346424  786575 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0908 11:39:06.464647  786575 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0908 11:39:06.464745  786575 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0908 11:39:06.470215  786575 start.go:563] Will wait 60s for crictl version
	I0908 11:39:06.470283  786575 ssh_runner.go:195] Run: which crictl
	I0908 11:39:06.474316  786575 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 11:39:06.518432  786575 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0908 11:39:06.518544  786575 ssh_runner.go:195] Run: crio --version
	I0908 11:39:06.548010  786575 ssh_runner.go:195] Run: crio --version
	I0908 11:39:06.578651  786575 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I0908 11:39:06.579882  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetIP
	I0908 11:39:06.582522  786575 main.go:141] libmachine: (test-preload-493455) DBG | domain test-preload-493455 has defined MAC address 52:54:00:eb:de:d9 in network mk-test-preload-493455
	I0908 11:39:06.582875  786575 main.go:141] libmachine: (test-preload-493455) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:de:d9", ip: ""} in network mk-test-preload-493455: {Iface:virbr1 ExpiryTime:2025-09-08 12:38:56 +0000 UTC Type:0 Mac:52:54:00:eb:de:d9 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:test-preload-493455 Clientid:01:52:54:00:eb:de:d9}
	I0908 11:39:06.582900  786575 main.go:141] libmachine: (test-preload-493455) DBG | domain test-preload-493455 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:de:d9 in network mk-test-preload-493455
	I0908 11:39:06.583110  786575 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0908 11:39:06.587711  786575 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 11:39:06.603306  786575 kubeadm.go:875] updating cluster {Name:test-preload-493455 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-493455 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.62 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 11:39:06.603415  786575 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0908 11:39:06.603461  786575 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 11:39:06.641443  786575 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I0908 11:39:06.641507  786575 ssh_runner.go:195] Run: which lz4
	I0908 11:39:06.646104  786575 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0908 11:39:06.650939  786575 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0908 11:39:06.650969  786575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I0908 11:39:08.130346  786575 crio.go:462] duration metric: took 1.484294329s to copy over tarball
	I0908 11:39:08.130459  786575 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0908 11:39:09.831872  786575 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.701369533s)
	I0908 11:39:09.831910  786575 crio.go:469] duration metric: took 1.701525983s to extract the tarball
	I0908 11:39:09.831921  786575 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0908 11:39:09.872096  786575 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 11:39:09.916730  786575 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 11:39:09.916756  786575 cache_images.go:85] Images are preloaded, skipping loading
	I0908 11:39:09.916764  786575 kubeadm.go:926] updating node { 192.168.39.62 8443 v1.32.0 crio true true} ...
	I0908 11:39:09.916887  786575 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-493455 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.62
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-493455 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0908 11:39:09.916964  786575 ssh_runner.go:195] Run: crio config
	I0908 11:39:09.963794  786575 cni.go:84] Creating CNI manager for ""
	I0908 11:39:09.963821  786575 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0908 11:39:09.963833  786575 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 11:39:09.963854  786575 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.62 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-493455 NodeName:test-preload-493455 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.62"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.62 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 11:39:09.963970  786575 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.62
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-493455"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.62"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.62"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0908 11:39:09.964032  786575 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0908 11:39:09.976639  786575 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 11:39:09.976699  786575 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 11:39:09.988103  786575 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0908 11:39:10.007701  786575 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 11:39:10.027347  786575 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I0908 11:39:10.047483  786575 ssh_runner.go:195] Run: grep 192.168.39.62	control-plane.minikube.internal$ /etc/hosts
	I0908 11:39:10.051545  786575 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.62	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 11:39:10.066001  786575 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:39:10.204385  786575 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 11:39:10.250585  786575 certs.go:68] Setting up /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/test-preload-493455 for IP: 192.168.39.62
	I0908 11:39:10.250615  786575 certs.go:194] generating shared ca certs ...
	I0908 11:39:10.250640  786575 certs.go:226] acquiring lock for ca certs: {Name:mkaa8fe7cb1fe9bdb745b85589d42151c557e20e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:39:10.250824  786575 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21503-748170/.minikube/ca.key
	I0908 11:39:10.250890  786575 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21503-748170/.minikube/proxy-client-ca.key
	I0908 11:39:10.250903  786575 certs.go:256] generating profile certs ...
	I0908 11:39:10.250996  786575 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/test-preload-493455/client.key
	I0908 11:39:10.251080  786575 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/test-preload-493455/apiserver.key.a917c345
	I0908 11:39:10.251160  786575 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/test-preload-493455/proxy-client.key
	I0908 11:39:10.251297  786575 certs.go:484] found cert: /home/jenkins/minikube-integration/21503-748170/.minikube/certs/752332.pem (1338 bytes)
	W0908 11:39:10.251343  786575 certs.go:480] ignoring /home/jenkins/minikube-integration/21503-748170/.minikube/certs/752332_empty.pem, impossibly tiny 0 bytes
	I0908 11:39:10.251357  786575 certs.go:484] found cert: /home/jenkins/minikube-integration/21503-748170/.minikube/certs/ca-key.pem (1679 bytes)
	I0908 11:39:10.251391  786575 certs.go:484] found cert: /home/jenkins/minikube-integration/21503-748170/.minikube/certs/ca.pem (1078 bytes)
	I0908 11:39:10.251424  786575 certs.go:484] found cert: /home/jenkins/minikube-integration/21503-748170/.minikube/certs/cert.pem (1123 bytes)
	I0908 11:39:10.251458  786575 certs.go:484] found cert: /home/jenkins/minikube-integration/21503-748170/.minikube/certs/key.pem (1675 bytes)
	I0908 11:39:10.251512  786575 certs.go:484] found cert: /home/jenkins/minikube-integration/21503-748170/.minikube/files/etc/ssl/certs/7523322.pem (1708 bytes)
	I0908 11:39:10.252073  786575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 11:39:10.288946  786575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0908 11:39:10.330106  786575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 11:39:10.361638  786575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0908 11:39:10.389969  786575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/test-preload-493455/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0908 11:39:10.418063  786575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/test-preload-493455/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0908 11:39:10.446024  786575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/test-preload-493455/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 11:39:10.474712  786575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/test-preload-493455/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0908 11:39:10.503785  786575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/files/etc/ssl/certs/7523322.pem --> /usr/share/ca-certificates/7523322.pem (1708 bytes)
	I0908 11:39:10.531805  786575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 11:39:10.559960  786575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/certs/752332.pem --> /usr/share/ca-certificates/752332.pem (1338 bytes)
	I0908 11:39:10.587802  786575 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 11:39:10.608798  786575 ssh_runner.go:195] Run: openssl version
	I0908 11:39:10.615469  786575 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/752332.pem && ln -fs /usr/share/ca-certificates/752332.pem /etc/ssl/certs/752332.pem"
	I0908 11:39:10.628408  786575 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/752332.pem
	I0908 11:39:10.633563  786575 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  8 10:41 /usr/share/ca-certificates/752332.pem
	I0908 11:39:10.633620  786575 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/752332.pem
	I0908 11:39:10.640752  786575 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/752332.pem /etc/ssl/certs/51391683.0"
	I0908 11:39:10.653524  786575 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7523322.pem && ln -fs /usr/share/ca-certificates/7523322.pem /etc/ssl/certs/7523322.pem"
	I0908 11:39:10.666169  786575 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7523322.pem
	I0908 11:39:10.671487  786575 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  8 10:41 /usr/share/ca-certificates/7523322.pem
	I0908 11:39:10.671540  786575 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7523322.pem
	I0908 11:39:10.678840  786575 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7523322.pem /etc/ssl/certs/3ec20f2e.0"
	I0908 11:39:10.691628  786575 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 11:39:10.704413  786575 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:39:10.709481  786575 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 10:30 /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:39:10.709530  786575 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:39:10.716541  786575 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0908 11:39:10.729259  786575 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 11:39:10.734404  786575 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0908 11:39:10.741542  786575 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0908 11:39:10.748503  786575 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0908 11:39:10.755703  786575 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0908 11:39:10.762573  786575 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0908 11:39:10.769428  786575 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
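
Each `openssl x509 -noout -in ... -checkend 86400` run above asks whether the certificate will still be valid 86400 seconds (24 hours) from now. A rough Go equivalent of that check, as a sketch rather than minikube's own code:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkend reports whether the PEM certificate at path is still valid
// `window` from now (the moral equivalent of `openssl x509 -checkend`).
func checkend(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).Before(cert.NotAfter), nil
}

func main() {
	ok, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
	fmt.Println(ok, err)
}
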
	I0908 11:39:10.776535  786575 kubeadm.go:392] StartCluster: {Name:test-preload-493455 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-493455 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.62 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 11:39:10.776623  786575 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0908 11:39:10.776676  786575 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 11:39:10.818328  786575 cri.go:89] found id: ""
	I0908 11:39:10.818435  786575 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0908 11:39:10.830914  786575 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0908 11:39:10.830940  786575 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0908 11:39:10.831002  786575 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0908 11:39:10.842940  786575 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0908 11:39:10.843429  786575 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-493455" does not appear in /home/jenkins/minikube-integration/21503-748170/kubeconfig
	I0908 11:39:10.843543  786575 kubeconfig.go:62] /home/jenkins/minikube-integration/21503-748170/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-493455" cluster setting kubeconfig missing "test-preload-493455" context setting]
	I0908 11:39:10.843825  786575 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21503-748170/kubeconfig: {Name:mk78ced2572c8fbe21fb139deb9ae019703be092 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:39:10.844384  786575 kapi.go:59] client config for test-preload-493455: &rest.Config{Host:"https://192.168.39.62:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21503-748170/.minikube/profiles/test-preload-493455/client.crt", KeyFile:"/home/jenkins/minikube-integration/21503-748170/.minikube/profiles/test-preload-493455/client.key", CAFile:"/home/jenkins/minikube-integration/21503-748170/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25a3920), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0908 11:39:10.844797  786575 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0908 11:39:10.844812  786575 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0908 11:39:10.844816  786575 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0908 11:39:10.844822  786575 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0908 11:39:10.844828  786575 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0908 11:39:10.845210  786575 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0908 11:39:10.857056  786575 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.39.62
	I0908 11:39:10.857088  786575 kubeadm.go:1152] stopping kube-system containers ...
	I0908 11:39:10.857104  786575 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0908 11:39:10.857164  786575 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 11:39:10.895894  786575 cri.go:89] found id: ""
	I0908 11:39:10.895965  786575 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0908 11:39:10.915393  786575 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0908 11:39:10.927949  786575 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0908 11:39:10.927974  786575 kubeadm.go:157] found existing configuration files:
	
	I0908 11:39:10.928032  786575 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0908 11:39:10.939118  786575 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0908 11:39:10.939195  786575 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0908 11:39:10.951057  786575 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0908 11:39:10.962120  786575 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0908 11:39:10.962190  786575 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0908 11:39:10.973897  786575 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0908 11:39:10.985015  786575 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0908 11:39:10.985072  786575 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0908 11:39:10.996651  786575 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0908 11:39:11.007508  786575 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0908 11:39:11.007574  786575 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
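
The grep/rm sequence above checks each static kubeconfig under /etc/kubernetes for the expected control-plane endpoint and deletes any file that is missing it, so that the following `kubeadm init phase kubeconfig` run regenerates them. A condensed sketch of that logic (paths taken from the log, surrounding helper code assumed):

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			// Missing file or wrong endpoint: remove it so the next
			// `kubeadm init phase kubeconfig all` writes a fresh copy
			// (mirrors the `sudo rm -f` calls in the log above).
			os.Remove(f)
			fmt.Println("removed stale", f)
		}
	}
}
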
	I0908 11:39:11.018776  786575 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0908 11:39:11.030295  786575 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 11:39:11.086381  786575 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 11:39:11.810714  786575 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0908 11:39:12.066159  786575 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 11:39:12.136009  786575 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0908 11:39:12.212712  786575 api_server.go:52] waiting for apiserver process to appear ...
	I0908 11:39:12.212811  786575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 11:39:12.713825  786575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 11:39:13.213538  786575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 11:39:13.713768  786575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 11:39:14.213563  786575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 11:39:14.713235  786575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 11:39:14.743548  786575 api_server.go:72] duration metric: took 2.530833194s to wait for apiserver process to appear ...
	I0908 11:39:14.743592  786575 api_server.go:88] waiting for apiserver healthz status ...
	I0908 11:39:14.743619  786575 api_server.go:253] Checking apiserver healthz at https://192.168.39.62:8443/healthz ...
	I0908 11:39:17.499511  786575 api_server.go:279] https://192.168.39.62:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0908 11:39:17.499553  786575 api_server.go:103] status: https://192.168.39.62:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0908 11:39:17.499574  786575 api_server.go:253] Checking apiserver healthz at https://192.168.39.62:8443/healthz ...
	I0908 11:39:17.528910  786575 api_server.go:279] https://192.168.39.62:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0908 11:39:17.528941  786575 api_server.go:103] status: https://192.168.39.62:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0908 11:39:17.744438  786575 api_server.go:253] Checking apiserver healthz at https://192.168.39.62:8443/healthz ...
	I0908 11:39:17.750742  786575 api_server.go:279] https://192.168.39.62:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 11:39:17.750775  786575 api_server.go:103] status: https://192.168.39.62:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0908 11:39:18.244483  786575 api_server.go:253] Checking apiserver healthz at https://192.168.39.62:8443/healthz ...
	I0908 11:39:18.255237  786575 api_server.go:279] https://192.168.39.62:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 11:39:18.255265  786575 api_server.go:103] status: https://192.168.39.62:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0908 11:39:18.743924  786575 api_server.go:253] Checking apiserver healthz at https://192.168.39.62:8443/healthz ...
	I0908 11:39:18.752935  786575 api_server.go:279] https://192.168.39.62:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 11:39:18.753012  786575 api_server.go:103] status: https://192.168.39.62:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0908 11:39:19.244004  786575 api_server.go:253] Checking apiserver healthz at https://192.168.39.62:8443/healthz ...
	I0908 11:39:19.249874  786575 api_server.go:279] https://192.168.39.62:8443/healthz returned 200:
	ok
	I0908 11:39:19.257632  786575 api_server.go:141] control plane version: v1.32.0
	I0908 11:39:19.257656  786575 api_server.go:131] duration metric: took 4.514056893s to wait for apiserver health ...
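
The healthz probes above progress from 403 (the anonymous probe is rejected) through 500 (post-start hooks such as rbac/bootstrap-roles still reporting failure) to 200 "ok". A simplified polling loop of the same shape; this sketch skips TLS verification for brevity, which a production client should not do:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it returns 200 "ok"
// or the deadline passes.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // roughly the retry cadence seen in the log
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.39.62:8443/healthz", 4*time.Minute))
}
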
	I0908 11:39:19.257666  786575 cni.go:84] Creating CNI manager for ""
	I0908 11:39:19.257673  786575 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0908 11:39:19.259134  786575 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0908 11:39:19.260240  786575 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0908 11:39:19.278055  786575 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
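
The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above configures the bridge CNI plugin for the 10.244.0.0/16 pod subnet. An illustrative conflist of that kind written from Go; the exact contents minikube generates may differ:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Assumed shape of a bridge CNI conflist for the pod subnet shown in the
	// kubeadm config above; illustrative only.
	conflist := `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
`
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
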
	I0908 11:39:19.301221  786575 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 11:39:19.310814  786575 system_pods.go:59] 7 kube-system pods found
	I0908 11:39:19.310846  786575 system_pods.go:61] "coredns-668d6bf9bc-rhwsq" [e7a9e2d4-aaf4-4f77-a740-6149c1827e71] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 11:39:19.310853  786575 system_pods.go:61] "etcd-test-preload-493455" [ea875c73-63cc-4897-9f78-f11d1d058e2b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 11:39:19.310861  786575 system_pods.go:61] "kube-apiserver-test-preload-493455" [df7a0da4-5798-4b72-8314-77f6aaacb6b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 11:39:19.310866  786575 system_pods.go:61] "kube-controller-manager-test-preload-493455" [768b1dac-ddb7-4387-9b23-7f65bf538ced] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 11:39:19.310872  786575 system_pods.go:61] "kube-proxy-hknnq" [3f774274-5790-4178-9271-42cdc552b8b7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0908 11:39:19.310877  786575 system_pods.go:61] "kube-scheduler-test-preload-493455" [02abd477-636e-4daa-8276-e8fdcba0e067] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 11:39:19.310881  786575 system_pods.go:61] "storage-provisioner" [d2586d99-c7c4-4b31-baae-2f565245f60b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 11:39:19.310888  786575 system_pods.go:74] duration metric: took 9.633242ms to wait for pod list to return data ...
	I0908 11:39:19.310895  786575 node_conditions.go:102] verifying NodePressure condition ...
	I0908 11:39:19.315030  786575 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0908 11:39:19.315051  786575 node_conditions.go:123] node cpu capacity is 2
	I0908 11:39:19.315062  786575 node_conditions.go:105] duration metric: took 4.162861ms to run NodePressure ...
	I0908 11:39:19.315080  786575 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 11:39:19.598455  786575 kubeadm.go:720] waiting for restarted kubelet to initialise ...
	I0908 11:39:19.603147  786575 kubeadm.go:735] kubelet initialised
	I0908 11:39:19.603170  786575 kubeadm.go:736] duration metric: took 4.690644ms waiting for restarted kubelet to initialise ...
	I0908 11:39:19.603187  786575 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0908 11:39:19.628128  786575 ops.go:34] apiserver oom_adj: -16
	I0908 11:39:19.628150  786575 kubeadm.go:593] duration metric: took 8.797204041s to restartPrimaryControlPlane
	I0908 11:39:19.628160  786575 kubeadm.go:394] duration metric: took 8.851635443s to StartCluster
	I0908 11:39:19.628178  786575 settings.go:142] acquiring lock: {Name:mk18c67e9470bbfdfeaf7a5d3ce5d7a1813bc966 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:39:19.628256  786575 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21503-748170/kubeconfig
	I0908 11:39:19.628809  786575 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21503-748170/kubeconfig: {Name:mk78ced2572c8fbe21fb139deb9ae019703be092 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:39:19.629033  786575 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.62 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0908 11:39:19.629114  786575 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0908 11:39:19.629223  786575 addons.go:69] Setting storage-provisioner=true in profile "test-preload-493455"
	I0908 11:39:19.629255  786575 addons.go:238] Setting addon storage-provisioner=true in "test-preload-493455"
	W0908 11:39:19.629268  786575 addons.go:247] addon storage-provisioner should already be in state true
	I0908 11:39:19.629269  786575 addons.go:69] Setting default-storageclass=true in profile "test-preload-493455"
	I0908 11:39:19.629293  786575 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-493455"
	I0908 11:39:19.629303  786575 host.go:66] Checking if "test-preload-493455" exists ...
	I0908 11:39:19.629344  786575 config.go:182] Loaded profile config "test-preload-493455": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0908 11:39:19.629619  786575 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:39:19.629646  786575 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:39:19.629655  786575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:39:19.629688  786575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:39:19.631465  786575 out.go:179] * Verifying Kubernetes components...
	I0908 11:39:19.632746  786575 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:39:19.645655  786575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39971
	I0908 11:39:19.645756  786575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45533
	I0908 11:39:19.646285  786575 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:39:19.646359  786575 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:39:19.646853  786575 main.go:141] libmachine: Using API Version  1
	I0908 11:39:19.646881  786575 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:39:19.646989  786575 main.go:141] libmachine: Using API Version  1
	I0908 11:39:19.647014  786575 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:39:19.647277  786575 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:39:19.647390  786575 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:39:19.647564  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetState
	I0908 11:39:19.647799  786575 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:39:19.647850  786575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:39:19.650072  786575 kapi.go:59] client config for test-preload-493455: &rest.Config{Host:"https://192.168.39.62:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21503-748170/.minikube/profiles/test-preload-493455/client.crt", KeyFile:"/home/jenkins/minikube-integration/21503-748170/.minikube/profiles/test-preload-493455/client.key", CAFile:"/home/jenkins/minikube-integration/21503-748170/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25a3920), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0908 11:39:19.650501  786575 addons.go:238] Setting addon default-storageclass=true in "test-preload-493455"
	W0908 11:39:19.650527  786575 addons.go:247] addon default-storageclass should already be in state true
	I0908 11:39:19.650558  786575 host.go:66] Checking if "test-preload-493455" exists ...
	I0908 11:39:19.651005  786575 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:39:19.651058  786575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:39:19.664269  786575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34365
	I0908 11:39:19.664839  786575 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:39:19.665341  786575 main.go:141] libmachine: Using API Version  1
	I0908 11:39:19.665368  786575 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:39:19.665558  786575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36037
	I0908 11:39:19.665767  786575 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:39:19.665936  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetState
	I0908 11:39:19.666007  786575 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:39:19.666418  786575 main.go:141] libmachine: Using API Version  1
	I0908 11:39:19.666437  786575 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:39:19.666786  786575 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:39:19.667351  786575 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:39:19.667401  786575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:39:19.667590  786575 main.go:141] libmachine: (test-preload-493455) Calling .DriverName
	I0908 11:39:19.669466  786575 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 11:39:19.670712  786575 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 11:39:19.670736  786575 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0908 11:39:19.670759  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetSSHHostname
	I0908 11:39:19.675404  786575 main.go:141] libmachine: (test-preload-493455) DBG | domain test-preload-493455 has defined MAC address 52:54:00:eb:de:d9 in network mk-test-preload-493455
	I0908 11:39:19.675876  786575 main.go:141] libmachine: (test-preload-493455) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:de:d9", ip: ""} in network mk-test-preload-493455: {Iface:virbr1 ExpiryTime:2025-09-08 12:38:56 +0000 UTC Type:0 Mac:52:54:00:eb:de:d9 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:test-preload-493455 Clientid:01:52:54:00:eb:de:d9}
	I0908 11:39:19.675922  786575 main.go:141] libmachine: (test-preload-493455) DBG | domain test-preload-493455 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:de:d9 in network mk-test-preload-493455
	I0908 11:39:19.676061  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetSSHPort
	I0908 11:39:19.676234  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetSSHKeyPath
	I0908 11:39:19.676363  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetSSHUsername
	I0908 11:39:19.676518  786575 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/test-preload-493455/id_rsa Username:docker}
	I0908 11:39:19.686898  786575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46119
	I0908 11:39:19.687439  786575 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:39:19.687893  786575 main.go:141] libmachine: Using API Version  1
	I0908 11:39:19.687916  786575 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:39:19.688317  786575 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:39:19.688522  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetState
	I0908 11:39:19.690274  786575 main.go:141] libmachine: (test-preload-493455) Calling .DriverName
	I0908 11:39:19.690475  786575 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0908 11:39:19.690489  786575 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0908 11:39:19.690508  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetSSHHostname
	I0908 11:39:19.693060  786575 main.go:141] libmachine: (test-preload-493455) DBG | domain test-preload-493455 has defined MAC address 52:54:00:eb:de:d9 in network mk-test-preload-493455
	I0908 11:39:19.693467  786575 main.go:141] libmachine: (test-preload-493455) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:de:d9", ip: ""} in network mk-test-preload-493455: {Iface:virbr1 ExpiryTime:2025-09-08 12:38:56 +0000 UTC Type:0 Mac:52:54:00:eb:de:d9 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:test-preload-493455 Clientid:01:52:54:00:eb:de:d9}
	I0908 11:39:19.693534  786575 main.go:141] libmachine: (test-preload-493455) DBG | domain test-preload-493455 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:de:d9 in network mk-test-preload-493455
	I0908 11:39:19.693773  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetSSHPort
	I0908 11:39:19.693935  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetSSHKeyPath
	I0908 11:39:19.694062  786575 main.go:141] libmachine: (test-preload-493455) Calling .GetSSHUsername
	I0908 11:39:19.694180  786575 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/test-preload-493455/id_rsa Username:docker}
	I0908 11:39:19.862383  786575 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 11:39:19.883553  786575 node_ready.go:35] waiting up to 6m0s for node "test-preload-493455" to be "Ready" ...
	I0908 11:39:19.886638  786575 node_ready.go:49] node "test-preload-493455" is "Ready"
	I0908 11:39:19.886667  786575 node_ready.go:38] duration metric: took 3.077979ms for node "test-preload-493455" to be "Ready" ...
	I0908 11:39:19.886684  786575 api_server.go:52] waiting for apiserver process to appear ...
	I0908 11:39:19.886743  786575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 11:39:19.904892  786575 api_server.go:72] duration metric: took 275.828948ms to wait for apiserver process to appear ...
	I0908 11:39:19.904924  786575 api_server.go:88] waiting for apiserver healthz status ...
	I0908 11:39:19.904948  786575 api_server.go:253] Checking apiserver healthz at https://192.168.39.62:8443/healthz ...
	I0908 11:39:19.910807  786575 api_server.go:279] https://192.168.39.62:8443/healthz returned 200:
	ok
	I0908 11:39:19.911614  786575 api_server.go:141] control plane version: v1.32.0
	I0908 11:39:19.911635  786575 api_server.go:131] duration metric: took 6.703775ms to wait for apiserver health ...
	I0908 11:39:19.911646  786575 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 11:39:19.917245  786575 system_pods.go:59] 7 kube-system pods found
	I0908 11:39:19.917273  786575 system_pods.go:61] "coredns-668d6bf9bc-rhwsq" [e7a9e2d4-aaf4-4f77-a740-6149c1827e71] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 11:39:19.917283  786575 system_pods.go:61] "etcd-test-preload-493455" [ea875c73-63cc-4897-9f78-f11d1d058e2b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 11:39:19.917331  786575 system_pods.go:61] "kube-apiserver-test-preload-493455" [df7a0da4-5798-4b72-8314-77f6aaacb6b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 11:39:19.917348  786575 system_pods.go:61] "kube-controller-manager-test-preload-493455" [768b1dac-ddb7-4387-9b23-7f65bf538ced] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 11:39:19.917363  786575 system_pods.go:61] "kube-proxy-hknnq" [3f774274-5790-4178-9271-42cdc552b8b7] Running
	I0908 11:39:19.917371  786575 system_pods.go:61] "kube-scheduler-test-preload-493455" [02abd477-636e-4daa-8276-e8fdcba0e067] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 11:39:19.917377  786575 system_pods.go:61] "storage-provisioner" [d2586d99-c7c4-4b31-baae-2f565245f60b] Running
	I0908 11:39:19.917385  786575 system_pods.go:74] duration metric: took 5.732983ms to wait for pod list to return data ...
	I0908 11:39:19.917395  786575 default_sa.go:34] waiting for default service account to be created ...
	I0908 11:39:19.919732  786575 default_sa.go:45] found service account: "default"
	I0908 11:39:19.919746  786575 default_sa.go:55] duration metric: took 2.346085ms for default service account to be created ...
	I0908 11:39:19.919754  786575 system_pods.go:116] waiting for k8s-apps to be running ...
	I0908 11:39:19.923307  786575 system_pods.go:86] 7 kube-system pods found
	I0908 11:39:19.923334  786575 system_pods.go:89] "coredns-668d6bf9bc-rhwsq" [e7a9e2d4-aaf4-4f77-a740-6149c1827e71] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 11:39:19.923345  786575 system_pods.go:89] "etcd-test-preload-493455" [ea875c73-63cc-4897-9f78-f11d1d058e2b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 11:39:19.923354  786575 system_pods.go:89] "kube-apiserver-test-preload-493455" [df7a0da4-5798-4b72-8314-77f6aaacb6b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 11:39:19.923377  786575 system_pods.go:89] "kube-controller-manager-test-preload-493455" [768b1dac-ddb7-4387-9b23-7f65bf538ced] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 11:39:19.923385  786575 system_pods.go:89] "kube-proxy-hknnq" [3f774274-5790-4178-9271-42cdc552b8b7] Running
	I0908 11:39:19.923393  786575 system_pods.go:89] "kube-scheduler-test-preload-493455" [02abd477-636e-4daa-8276-e8fdcba0e067] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 11:39:19.923403  786575 system_pods.go:89] "storage-provisioner" [d2586d99-c7c4-4b31-baae-2f565245f60b] Running
	I0908 11:39:19.923413  786575 system_pods.go:126] duration metric: took 3.652193ms to wait for k8s-apps to be running ...
	I0908 11:39:19.923425  786575 system_svc.go:44] waiting for kubelet service to be running ....
	I0908 11:39:19.923471  786575 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 11:39:19.940416  786575 system_svc.go:56] duration metric: took 16.985107ms WaitForService to wait for kubelet
	I0908 11:39:19.940443  786575 kubeadm.go:578] duration metric: took 311.384778ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 11:39:19.940465  786575 node_conditions.go:102] verifying NodePressure condition ...
	I0908 11:39:19.943413  786575 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0908 11:39:19.943443  786575 node_conditions.go:123] node cpu capacity is 2
	I0908 11:39:19.943457  786575 node_conditions.go:105] duration metric: took 2.987113ms to run NodePressure ...
	I0908 11:39:19.943471  786575 start.go:241] waiting for startup goroutines ...
	I0908 11:39:20.023987  786575 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 11:39:20.040264  786575 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0908 11:39:20.674439  786575 main.go:141] libmachine: Making call to close driver server
	I0908 11:39:20.674465  786575 main.go:141] libmachine: (test-preload-493455) Calling .Close
	I0908 11:39:20.674522  786575 main.go:141] libmachine: Making call to close driver server
	I0908 11:39:20.674540  786575 main.go:141] libmachine: (test-preload-493455) Calling .Close
	I0908 11:39:20.674768  786575 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:39:20.674785  786575 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:39:20.674796  786575 main.go:141] libmachine: Making call to close driver server
	I0908 11:39:20.674803  786575 main.go:141] libmachine: (test-preload-493455) Calling .Close
	I0908 11:39:20.674840  786575 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:39:20.674874  786575 main.go:141] libmachine: (test-preload-493455) DBG | Closing plugin on server side
	I0908 11:39:20.674902  786575 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:39:20.674918  786575 main.go:141] libmachine: Making call to close driver server
	I0908 11:39:20.674925  786575 main.go:141] libmachine: (test-preload-493455) Calling .Close
	I0908 11:39:20.675038  786575 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:39:20.675055  786575 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:39:20.675129  786575 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:39:20.675145  786575 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:39:20.675168  786575 main.go:141] libmachine: (test-preload-493455) DBG | Closing plugin on server side
	I0908 11:39:20.681558  786575 main.go:141] libmachine: Making call to close driver server
	I0908 11:39:20.681594  786575 main.go:141] libmachine: (test-preload-493455) Calling .Close
	I0908 11:39:20.681835  786575 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:39:20.681844  786575 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:39:20.684240  786575 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0908 11:39:20.685160  786575 addons.go:514] duration metric: took 1.056063094s for enable addons: enabled=[storage-provisioner default-storageclass]
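
Each addon above is enabled by applying its manifest with the bundled kubectl against the in-VM kubeconfig. A hedged sketch of that apply step: the command shape is taken from the log lines above, the surrounding helper is assumed, and in the real flow the command runs over SSH inside the VM rather than locally.

package main

import (
	"fmt"
	"os/exec"
)

// applyAddon mirrors the `sudo KUBECONFIG=... kubectl apply -f ...` calls in the log.
func applyAddon(manifest string) error {
	cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.32.0/kubectl", "apply", "-f", manifest)
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	return err
}

func main() {
	for _, m := range []string{
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
	} {
		if err := applyAddon(m); err != nil {
			fmt.Println("apply failed:", err)
		}
	}
}
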
	I0908 11:39:20.685202  786575 start.go:246] waiting for cluster config update ...
	I0908 11:39:20.685221  786575 start.go:255] writing updated cluster config ...
	I0908 11:39:20.685476  786575 ssh_runner.go:195] Run: rm -f paused
	I0908 11:39:20.690900  786575 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 11:39:20.691312  786575 kapi.go:59] client config for test-preload-493455: &rest.Config{Host:"https://192.168.39.62:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21503-748170/.minikube/profiles/test-preload-493455/client.crt", KeyFile:"/home/jenkins/minikube-integration/21503-748170/.minikube/profiles/test-preload-493455/client.key", CAFile:"/home/jenkins/minikube-integration/21503-748170/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25a3920), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0908 11:39:20.694531  786575 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-rhwsq" in "kube-system" namespace to be "Ready" or be gone ...
	W0908 11:39:22.699879  786575 pod_ready.go:104] pod "coredns-668d6bf9bc-rhwsq" is not "Ready", error: <nil>
	W0908 11:39:24.701288  786575 pod_ready.go:104] pod "coredns-668d6bf9bc-rhwsq" is not "Ready", error: <nil>
	W0908 11:39:27.200490  786575 pod_ready.go:104] pod "coredns-668d6bf9bc-rhwsq" is not "Ready", error: <nil>
	W0908 11:39:29.702311  786575 pod_ready.go:104] pod "coredns-668d6bf9bc-rhwsq" is not "Ready", error: <nil>
	I0908 11:39:31.200725  786575 pod_ready.go:94] pod "coredns-668d6bf9bc-rhwsq" is "Ready"
	I0908 11:39:31.200751  786575 pod_ready.go:86] duration metric: took 10.506198707s for pod "coredns-668d6bf9bc-rhwsq" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:39:31.203055  786575 pod_ready.go:83] waiting for pod "etcd-test-preload-493455" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:39:32.212549  786575 pod_ready.go:94] pod "etcd-test-preload-493455" is "Ready"
	I0908 11:39:32.212586  786575 pod_ready.go:86] duration metric: took 1.009505749s for pod "etcd-test-preload-493455" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:39:32.215954  786575 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-493455" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:39:32.219739  786575 pod_ready.go:94] pod "kube-apiserver-test-preload-493455" is "Ready"
	I0908 11:39:32.219771  786575 pod_ready.go:86] duration metric: took 3.789551ms for pod "kube-apiserver-test-preload-493455" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:39:32.222692  786575 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-493455" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:39:33.228229  786575 pod_ready.go:94] pod "kube-controller-manager-test-preload-493455" is "Ready"
	I0908 11:39:33.228271  786575 pod_ready.go:86] duration metric: took 1.005559653s for pod "kube-controller-manager-test-preload-493455" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:39:33.230455  786575 pod_ready.go:83] waiting for pod "kube-proxy-hknnq" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:39:33.599308  786575 pod_ready.go:94] pod "kube-proxy-hknnq" is "Ready"
	I0908 11:39:33.599342  786575 pod_ready.go:86] duration metric: took 368.865523ms for pod "kube-proxy-hknnq" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:39:33.798619  786575 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-493455" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:39:34.198723  786575 pod_ready.go:94] pod "kube-scheduler-test-preload-493455" is "Ready"
	I0908 11:39:34.198753  786575 pod_ready.go:86] duration metric: took 400.09507ms for pod "kube-scheduler-test-preload-493455" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:39:34.198765  786575 pod_ready.go:40] duration metric: took 13.507842928s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 11:39:34.239951  786575 start.go:617] kubectl: 1.33.2, cluster: 1.32.0 (minor skew: 1)
	I0908 11:39:34.241544  786575 out.go:179] * Done! kubectl is now configured to use "test-preload-493455" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 08 11:39:35 test-preload-493455 crio[834]: time="2025-09-08 11:39:35.152557223Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757331575152527136,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f47dfe06-ef05-4c03-91ae-d0e69dca364c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 11:39:35 test-preload-493455 crio[834]: time="2025-09-08 11:39:35.153177550Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e6634040-cf2d-43e3-a785-e47851c6990b name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 11:39:35 test-preload-493455 crio[834]: time="2025-09-08 11:39:35.153249224Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e6634040-cf2d-43e3-a785-e47851c6990b name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 11:39:35 test-preload-493455 crio[834]: time="2025-09-08 11:39:35.153394895Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8e280d63c65a674200f2c218763c0137af1364927c9d77fa80fbc342489ae916,PodSandboxId:936ce8ed5214fc1e04530985cae44b141d58a25789ef65c8f37ce7d051bcd522,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1757331562243802517,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-rhwsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7a9e2d4-aaf4-4f77-a740-6149c1827e71,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57222ed361fd8ea6fa38406c01b99ea98dde0d9b92d0812d85e02ec3e1b31ee3,PodSandboxId:087360b6164fe9ccd8cbdfb1c8b5886cc98a5273b3af440ce77bc4b46c3bfc2f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1757331558751568151,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hknnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 3f774274-5790-4178-9271-42cdc552b8b7,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89281e47cae61ef251d179198191e9fde82182c3f557115f455e962c4f33af1d,PodSandboxId:71e3ab73118c8e98a6890c8e95333b540d0dfe7f6f46a16b773e19ece2660fd8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1757331558637199659,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2
586d99-c7c4-4b31-baae-2f565245f60b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdcd62ded69d25730a864c74694bf37a58b94eab0c3463fcbd51802f2371f3df,PodSandboxId:5a4caa1f6f3243633ad8d080d9850adca85096146ce50734e0c9ac72ed05d2bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1757331554349176744,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-493455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d3403b6e9896f846673d8729c3fbab2,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfed9e31bce5bbcc57d574af21cefee7c34d6036f753d54393efeec6f42cc7b8,PodSandboxId:19339eef71b6259e1c054ec6a2f77933f743d0578ffae0c654b17887c6dc136e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1757331554375003223,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-493455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c66533d5ed8f464277dd877
2cb417bd0,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:995edbd6f3b435595b98b6612247948159a964bc885260a74b4b799a50abd3f8,PodSandboxId:ee18c6c9cca52d38d6caac71eba46f96751944944d16f50363f58fb3eaa53be0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1757331554311480803,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-493455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff2f712fd259a057c5dbf1bd06103bd9,}
,Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0bc293d08c365cfff511449e05a4e546817d41c2115a18fd3ebe2c4fb827484,PodSandboxId:f6eb463d2d17f8ae90ecbefeab6262d0a8f80f7c74d7cf9605b2a8025641299b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1757331554320254921,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-493455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59ef3d081d24f7654d4eaad277192a10,},Annotation
s:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e6634040-cf2d-43e3-a785-e47851c6990b name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 11:39:35 test-preload-493455 crio[834]: time="2025-09-08 11:39:35.193367977Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dfe5ccc7-19dd-4c03-a435-48897f801703 name=/runtime.v1.RuntimeService/Version
	Sep 08 11:39:35 test-preload-493455 crio[834]: time="2025-09-08 11:39:35.193435477Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dfe5ccc7-19dd-4c03-a435-48897f801703 name=/runtime.v1.RuntimeService/Version
	Sep 08 11:39:35 test-preload-493455 crio[834]: time="2025-09-08 11:39:35.195424759Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ca81be28-e2f4-470c-922c-b0ad4160a613 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 11:39:35 test-preload-493455 crio[834]: time="2025-09-08 11:39:35.195927324Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757331575195902422,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ca81be28-e2f4-470c-922c-b0ad4160a613 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 11:39:35 test-preload-493455 crio[834]: time="2025-09-08 11:39:35.196475258Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4533da01-90f1-4973-af95-f63d6ee3e199 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 11:39:35 test-preload-493455 crio[834]: time="2025-09-08 11:39:35.196536674Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4533da01-90f1-4973-af95-f63d6ee3e199 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 11:39:35 test-preload-493455 crio[834]: time="2025-09-08 11:39:35.197144650Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8e280d63c65a674200f2c218763c0137af1364927c9d77fa80fbc342489ae916,PodSandboxId:936ce8ed5214fc1e04530985cae44b141d58a25789ef65c8f37ce7d051bcd522,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1757331562243802517,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-rhwsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7a9e2d4-aaf4-4f77-a740-6149c1827e71,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57222ed361fd8ea6fa38406c01b99ea98dde0d9b92d0812d85e02ec3e1b31ee3,PodSandboxId:087360b6164fe9ccd8cbdfb1c8b5886cc98a5273b3af440ce77bc4b46c3bfc2f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1757331558751568151,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hknnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 3f774274-5790-4178-9271-42cdc552b8b7,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89281e47cae61ef251d179198191e9fde82182c3f557115f455e962c4f33af1d,PodSandboxId:71e3ab73118c8e98a6890c8e95333b540d0dfe7f6f46a16b773e19ece2660fd8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1757331558637199659,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2
586d99-c7c4-4b31-baae-2f565245f60b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdcd62ded69d25730a864c74694bf37a58b94eab0c3463fcbd51802f2371f3df,PodSandboxId:5a4caa1f6f3243633ad8d080d9850adca85096146ce50734e0c9ac72ed05d2bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1757331554349176744,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-493455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d3403b6e9896f846673d8729c3fbab2,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfed9e31bce5bbcc57d574af21cefee7c34d6036f753d54393efeec6f42cc7b8,PodSandboxId:19339eef71b6259e1c054ec6a2f77933f743d0578ffae0c654b17887c6dc136e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1757331554375003223,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-493455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c66533d5ed8f464277dd877
2cb417bd0,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:995edbd6f3b435595b98b6612247948159a964bc885260a74b4b799a50abd3f8,PodSandboxId:ee18c6c9cca52d38d6caac71eba46f96751944944d16f50363f58fb3eaa53be0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1757331554311480803,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-493455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff2f712fd259a057c5dbf1bd06103bd9,}
,Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0bc293d08c365cfff511449e05a4e546817d41c2115a18fd3ebe2c4fb827484,PodSandboxId:f6eb463d2d17f8ae90ecbefeab6262d0a8f80f7c74d7cf9605b2a8025641299b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1757331554320254921,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-493455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59ef3d081d24f7654d4eaad277192a10,},Annotation
s:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4533da01-90f1-4973-af95-f63d6ee3e199 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 11:39:35 test-preload-493455 crio[834]: time="2025-09-08 11:39:35.235734285Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=14d79b86-a5a1-44f4-a15d-9bab7c854824 name=/runtime.v1.RuntimeService/Version
	Sep 08 11:39:35 test-preload-493455 crio[834]: time="2025-09-08 11:39:35.235879765Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=14d79b86-a5a1-44f4-a15d-9bab7c854824 name=/runtime.v1.RuntimeService/Version
	Sep 08 11:39:35 test-preload-493455 crio[834]: time="2025-09-08 11:39:35.237434035Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=baad541e-2640-45a6-b195-3349d1dc882d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 11:39:35 test-preload-493455 crio[834]: time="2025-09-08 11:39:35.237932523Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757331575237809069,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=baad541e-2640-45a6-b195-3349d1dc882d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 11:39:35 test-preload-493455 crio[834]: time="2025-09-08 11:39:35.238452717Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=59bd9dff-dd85-42df-bb47-07d2d2ea8985 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 11:39:35 test-preload-493455 crio[834]: time="2025-09-08 11:39:35.238499285Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=59bd9dff-dd85-42df-bb47-07d2d2ea8985 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 11:39:35 test-preload-493455 crio[834]: time="2025-09-08 11:39:35.238654104Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8e280d63c65a674200f2c218763c0137af1364927c9d77fa80fbc342489ae916,PodSandboxId:936ce8ed5214fc1e04530985cae44b141d58a25789ef65c8f37ce7d051bcd522,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1757331562243802517,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-rhwsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7a9e2d4-aaf4-4f77-a740-6149c1827e71,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57222ed361fd8ea6fa38406c01b99ea98dde0d9b92d0812d85e02ec3e1b31ee3,PodSandboxId:087360b6164fe9ccd8cbdfb1c8b5886cc98a5273b3af440ce77bc4b46c3bfc2f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1757331558751568151,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hknnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 3f774274-5790-4178-9271-42cdc552b8b7,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89281e47cae61ef251d179198191e9fde82182c3f557115f455e962c4f33af1d,PodSandboxId:71e3ab73118c8e98a6890c8e95333b540d0dfe7f6f46a16b773e19ece2660fd8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1757331558637199659,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2
586d99-c7c4-4b31-baae-2f565245f60b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdcd62ded69d25730a864c74694bf37a58b94eab0c3463fcbd51802f2371f3df,PodSandboxId:5a4caa1f6f3243633ad8d080d9850adca85096146ce50734e0c9ac72ed05d2bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1757331554349176744,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-493455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d3403b6e9896f846673d8729c3fbab2,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfed9e31bce5bbcc57d574af21cefee7c34d6036f753d54393efeec6f42cc7b8,PodSandboxId:19339eef71b6259e1c054ec6a2f77933f743d0578ffae0c654b17887c6dc136e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1757331554375003223,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-493455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c66533d5ed8f464277dd877
2cb417bd0,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:995edbd6f3b435595b98b6612247948159a964bc885260a74b4b799a50abd3f8,PodSandboxId:ee18c6c9cca52d38d6caac71eba46f96751944944d16f50363f58fb3eaa53be0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1757331554311480803,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-493455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff2f712fd259a057c5dbf1bd06103bd9,}
,Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0bc293d08c365cfff511449e05a4e546817d41c2115a18fd3ebe2c4fb827484,PodSandboxId:f6eb463d2d17f8ae90ecbefeab6262d0a8f80f7c74d7cf9605b2a8025641299b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1757331554320254921,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-493455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59ef3d081d24f7654d4eaad277192a10,},Annotation
s:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=59bd9dff-dd85-42df-bb47-07d2d2ea8985 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 11:39:35 test-preload-493455 crio[834]: time="2025-09-08 11:39:35.273026272Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e07c9d8b-722e-44ed-bf0d-aa40b3072e15 name=/runtime.v1.RuntimeService/Version
	Sep 08 11:39:35 test-preload-493455 crio[834]: time="2025-09-08 11:39:35.273090214Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e07c9d8b-722e-44ed-bf0d-aa40b3072e15 name=/runtime.v1.RuntimeService/Version
	Sep 08 11:39:35 test-preload-493455 crio[834]: time="2025-09-08 11:39:35.274314395Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8d3dc4eb-a329-4792-a173-ceed212a70c4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 11:39:35 test-preload-493455 crio[834]: time="2025-09-08 11:39:35.274748013Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757331575274725511,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8d3dc4eb-a329-4792-a173-ceed212a70c4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 11:39:35 test-preload-493455 crio[834]: time="2025-09-08 11:39:35.275325532Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=73f775cc-b7fa-4bb3-91f4-e6bc7fcd1649 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 11:39:35 test-preload-493455 crio[834]: time="2025-09-08 11:39:35.275585525Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=73f775cc-b7fa-4bb3-91f4-e6bc7fcd1649 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 11:39:35 test-preload-493455 crio[834]: time="2025-09-08 11:39:35.276174386Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8e280d63c65a674200f2c218763c0137af1364927c9d77fa80fbc342489ae916,PodSandboxId:936ce8ed5214fc1e04530985cae44b141d58a25789ef65c8f37ce7d051bcd522,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1757331562243802517,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-rhwsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7a9e2d4-aaf4-4f77-a740-6149c1827e71,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57222ed361fd8ea6fa38406c01b99ea98dde0d9b92d0812d85e02ec3e1b31ee3,PodSandboxId:087360b6164fe9ccd8cbdfb1c8b5886cc98a5273b3af440ce77bc4b46c3bfc2f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1757331558751568151,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hknnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 3f774274-5790-4178-9271-42cdc552b8b7,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89281e47cae61ef251d179198191e9fde82182c3f557115f455e962c4f33af1d,PodSandboxId:71e3ab73118c8e98a6890c8e95333b540d0dfe7f6f46a16b773e19ece2660fd8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1757331558637199659,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2
586d99-c7c4-4b31-baae-2f565245f60b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdcd62ded69d25730a864c74694bf37a58b94eab0c3463fcbd51802f2371f3df,PodSandboxId:5a4caa1f6f3243633ad8d080d9850adca85096146ce50734e0c9ac72ed05d2bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1757331554349176744,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-493455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d3403b6e9896f846673d8729c3fbab2,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfed9e31bce5bbcc57d574af21cefee7c34d6036f753d54393efeec6f42cc7b8,PodSandboxId:19339eef71b6259e1c054ec6a2f77933f743d0578ffae0c654b17887c6dc136e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1757331554375003223,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-493455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c66533d5ed8f464277dd877
2cb417bd0,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:995edbd6f3b435595b98b6612247948159a964bc885260a74b4b799a50abd3f8,PodSandboxId:ee18c6c9cca52d38d6caac71eba46f96751944944d16f50363f58fb3eaa53be0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1757331554311480803,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-493455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff2f712fd259a057c5dbf1bd06103bd9,}
,Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0bc293d08c365cfff511449e05a4e546817d41c2115a18fd3ebe2c4fb827484,PodSandboxId:f6eb463d2d17f8ae90ecbefeab6262d0a8f80f7c74d7cf9605b2a8025641299b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1757331554320254921,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-493455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59ef3d081d24f7654d4eaad277192a10,},Annotation
s:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=73f775cc-b7fa-4bb3-91f4-e6bc7fcd1649 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8e280d63c65a6       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   13 seconds ago      Running             coredns                   1                   936ce8ed5214f       coredns-668d6bf9bc-rhwsq
	57222ed361fd8       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   16 seconds ago      Running             kube-proxy                1                   087360b6164fe       kube-proxy-hknnq
	89281e47cae61       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 seconds ago      Running             storage-provisioner       1                   71e3ab73118c8       storage-provisioner
	dfed9e31bce5b       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   20 seconds ago      Running             kube-controller-manager   1                   19339eef71b62       kube-controller-manager-test-preload-493455
	bdcd62ded69d2       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   21 seconds ago      Running             etcd                      1                   5a4caa1f6f324       etcd-test-preload-493455
	e0bc293d08c36       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   21 seconds ago      Running             kube-scheduler            1                   f6eb463d2d17f       kube-scheduler-test-preload-493455
	995edbd6f3b43       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   21 seconds ago      Running             kube-apiserver            1                   ee18c6c9cca52       kube-apiserver-test-preload-493455
	
	
	==> coredns [8e280d63c65a674200f2c218763c0137af1364927c9d77fa80fbc342489ae916] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:46125 - 34373 "HINFO IN 1555636609218975525.6393478158360354975. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013377896s
	
	
	==> describe nodes <==
	Name:               test-preload-493455
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-493455
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9b5c9e357ec605e3f7a3fbfd5f3e59fa37db6ba2
	                    minikube.k8s.io/name=test-preload-493455
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T11_38_05_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 11:38:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-493455
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 11:39:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 11:39:19 +0000   Mon, 08 Sep 2025 11:37:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 11:39:19 +0000   Mon, 08 Sep 2025 11:37:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 11:39:19 +0000   Mon, 08 Sep 2025 11:37:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 11:39:19 +0000   Mon, 08 Sep 2025 11:39:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.62
	  Hostname:    test-preload-493455
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 1208aec478c24c2fbdea2dea8480ba60
	  System UUID:                1208aec4-78c2-4c2f-bdea-2dea8480ba60
	  Boot ID:                    56abad37-3e1c-4dba-ae9d-8b87b84c4e52
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-rhwsq                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     86s
	  kube-system                 etcd-test-preload-493455                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         91s
	  kube-system                 kube-apiserver-test-preload-493455             250m (12%)    0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-controller-manager-test-preload-493455    200m (10%)    0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-proxy-hknnq                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-scheduler-test-preload-493455             100m (5%)     0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 16s                kube-proxy       
	  Normal   Starting                 85s                kube-proxy       
	  Normal   NodeHasSufficientPID     91s                kubelet          Node test-preload-493455 status is now: NodeHasSufficientPID
	  Normal   Starting                 91s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  91s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  91s                kubelet          Node test-preload-493455 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    91s                kubelet          Node test-preload-493455 status is now: NodeHasNoDiskPressure
	  Normal   NodeReady                90s                kubelet          Node test-preload-493455 status is now: NodeReady
	  Normal   RegisteredNode           87s                node-controller  Node test-preload-493455 event: Registered Node test-preload-493455 in Controller
	  Normal   CIDRAssignmentFailed     87s                cidrAllocator    Node test-preload-493455 status is now: CIDRAssignmentFailed
	  Normal   Starting                 23s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  23s (x8 over 23s)  kubelet          Node test-preload-493455 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    23s (x8 over 23s)  kubelet          Node test-preload-493455 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     23s (x7 over 23s)  kubelet          Node test-preload-493455 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  23s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 18s                kubelet          Node test-preload-493455 has been rebooted, boot id: 56abad37-3e1c-4dba-ae9d-8b87b84c4e52
	  Normal   RegisteredNode           15s                node-controller  Node test-preload-493455 event: Registered Node test-preload-493455 in Controller
	
	
	==> dmesg <==
	[Sep 8 11:38] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000046] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.002262] (rpcbind)[117]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.021115] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep 8 11:39] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.093923] kauditd_printk_skb: 74 callbacks suppressed
	[  +6.511987] kauditd_printk_skb: 177 callbacks suppressed
	[  +5.500444] kauditd_printk_skb: 197 callbacks suppressed
	
	
	==> etcd [bdcd62ded69d25730a864c74694bf37a58b94eab0c3463fcbd51802f2371f3df] <==
	{"level":"info","ts":"2025-09-08T11:39:14.833006Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-09-08T11:39:14.834888Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-09-08T11:39:14.834928Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-09-08T11:39:14.833203Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-09-08T11:39:14.842434Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-09-08T11:39:14.842523Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.62:2380"}
	{"level":"info","ts":"2025-09-08T11:39:14.850425Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.62:2380"}
	{"level":"info","ts":"2025-09-08T11:39:14.851067Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"4cff10f3f970b356","initial-advertise-peer-urls":["https://192.168.39.62:2380"],"listen-peer-urls":["https://192.168.39.62:2380"],"advertise-client-urls":["https://192.168.39.62:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.62:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-09-08T11:39:14.851163Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-09-08T11:39:16.397944Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4cff10f3f970b356 is starting a new election at term 2"}
	{"level":"info","ts":"2025-09-08T11:39:16.397991Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4cff10f3f970b356 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-09-08T11:39:16.398010Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4cff10f3f970b356 received MsgPreVoteResp from 4cff10f3f970b356 at term 2"}
	{"level":"info","ts":"2025-09-08T11:39:16.398022Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4cff10f3f970b356 became candidate at term 3"}
	{"level":"info","ts":"2025-09-08T11:39:16.398028Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4cff10f3f970b356 received MsgVoteResp from 4cff10f3f970b356 at term 3"}
	{"level":"info","ts":"2025-09-08T11:39:16.398036Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4cff10f3f970b356 became leader at term 3"}
	{"level":"info","ts":"2025-09-08T11:39:16.398045Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4cff10f3f970b356 elected leader 4cff10f3f970b356 at term 3"}
	{"level":"info","ts":"2025-09-08T11:39:16.400896Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"4cff10f3f970b356","local-member-attributes":"{Name:test-preload-493455 ClientURLs:[https://192.168.39.62:2379]}","request-path":"/0/members/4cff10f3f970b356/attributes","cluster-id":"cebe0b560c7f0a8","publish-timeout":"7s"}
	{"level":"info","ts":"2025-09-08T11:39:16.400977Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-08T11:39:16.401516Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-08T11:39:16.401627Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-09-08T11:39:16.401668Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-09-08T11:39:16.402395Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-09-08T11:39:16.402418Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-09-08T11:39:16.403261Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-09-08T11:39:16.403413Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.62:2379"}
	
	
	==> kernel <==
	 11:39:35 up 0 min,  0 users,  load average: 0.58, 0.17, 0.06
	Linux test-preload-493455 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep  4 13:14:36 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [995edbd6f3b435595b98b6612247948159a964bc885260a74b4b799a50abd3f8] <==
	I0908 11:39:17.541994       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0908 11:39:17.545556       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0908 11:39:17.546572       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0908 11:39:17.547286       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0908 11:39:17.547447       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0908 11:39:17.553080       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0908 11:39:17.553315       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0908 11:39:17.553402       1 aggregator.go:171] initial CRD sync complete...
	I0908 11:39:17.553426       1 autoregister_controller.go:144] Starting autoregister controller
	I0908 11:39:17.553432       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0908 11:39:17.553437       1 cache.go:39] Caches are synced for autoregister controller
	E0908 11:39:17.569497       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0908 11:39:17.590748       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0908 11:39:17.603188       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0908 11:39:17.603249       1 policy_source.go:240] refreshing policies
	I0908 11:39:17.660977       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0908 11:39:18.176597       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0908 11:39:18.446641       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0908 11:39:19.397222       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0908 11:39:19.432739       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0908 11:39:19.463803       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0908 11:39:19.477457       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0908 11:39:20.871078       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0908 11:39:21.071424       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0908 11:39:21.171712       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [dfed9e31bce5bbcc57d574af21cefee7c34d6036f753d54393efeec6f42cc7b8] <==
	I0908 11:39:20.788596       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0908 11:39:20.791877       1 shared_informer.go:320] Caches are synced for cronjob
	I0908 11:39:20.795147       1 shared_informer.go:320] Caches are synced for daemon sets
	I0908 11:39:20.795205       1 shared_informer.go:320] Caches are synced for garbage collector
	I0908 11:39:20.795217       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0908 11:39:20.795222       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0908 11:39:20.797262       1 shared_informer.go:320] Caches are synced for stateful set
	I0908 11:39:20.802996       1 shared_informer.go:320] Caches are synced for taint
	I0908 11:39:20.803290       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0908 11:39:20.803377       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="test-preload-493455"
	I0908 11:39:20.803438       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0908 11:39:20.807644       1 shared_informer.go:320] Caches are synced for attach detach
	I0908 11:39:20.814337       1 shared_informer.go:320] Caches are synced for endpoint
	I0908 11:39:20.818424       1 shared_informer.go:320] Caches are synced for expand
	I0908 11:39:20.820475       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0908 11:39:20.821602       1 shared_informer.go:320] Caches are synced for garbage collector
	I0908 11:39:20.822764       1 shared_informer.go:320] Caches are synced for PVC protection
	I0908 11:39:20.826045       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0908 11:39:20.826107       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0908 11:39:20.828321       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0908 11:39:21.077076       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="308.504349ms"
	I0908 11:39:21.078373       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="85.972µs"
	I0908 11:39:23.319650       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="44.491µs"
	I0908 11:39:30.893299       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="11.099436ms"
	I0908 11:39:30.893678       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="82.943µs"
	
	
	==> kube-proxy [57222ed361fd8ea6fa38406c01b99ea98dde0d9b92d0812d85e02ec3e1b31ee3] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0908 11:39:18.983604       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0908 11:39:18.992564       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.62"]
	E0908 11:39:18.992676       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 11:39:19.026711       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0908 11:39:19.026744       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0908 11:39:19.026762       1 server_linux.go:170] "Using iptables Proxier"
	I0908 11:39:19.029468       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 11:39:19.029725       1 server.go:497] "Version info" version="v1.32.0"
	I0908 11:39:19.029759       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 11:39:19.031372       1 config.go:199] "Starting service config controller"
	I0908 11:39:19.031409       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0908 11:39:19.031435       1 config.go:105] "Starting endpoint slice config controller"
	I0908 11:39:19.031439       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0908 11:39:19.032185       1 config.go:329] "Starting node config controller"
	I0908 11:39:19.032269       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0908 11:39:19.131739       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0908 11:39:19.131786       1 shared_informer.go:320] Caches are synced for service config
	I0908 11:39:19.132996       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e0bc293d08c365cfff511449e05a4e546817d41c2115a18fd3ebe2c4fb827484] <==
	I0908 11:39:15.701110       1 serving.go:386] Generated self-signed cert in-memory
	W0908 11:39:17.504718       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0908 11:39:17.504789       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0908 11:39:17.504800       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0908 11:39:17.504811       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0908 11:39:17.567327       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I0908 11:39:17.567391       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 11:39:17.571596       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 11:39:17.571666       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0908 11:39:17.573260       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0908 11:39:17.573360       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0908 11:39:17.671872       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 08 11:39:17 test-preload-493455 kubelet[1154]: I0908 11:39:17.679395    1154 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-test-preload-493455"
	Sep 08 11:39:17 test-preload-493455 kubelet[1154]: E0908 11:39:17.688805    1154 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-test-preload-493455\" already exists" pod="kube-system/etcd-test-preload-493455"
	Sep 08 11:39:17 test-preload-493455 kubelet[1154]: I0908 11:39:17.688881    1154 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-test-preload-493455"
	Sep 08 11:39:17 test-preload-493455 kubelet[1154]: E0908 11:39:17.696465    1154 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-test-preload-493455\" already exists" pod="kube-system/kube-apiserver-test-preload-493455"
	Sep 08 11:39:17 test-preload-493455 kubelet[1154]: I0908 11:39:17.696502    1154 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-test-preload-493455"
	Sep 08 11:39:17 test-preload-493455 kubelet[1154]: E0908 11:39:17.708691    1154 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-test-preload-493455\" already exists" pod="kube-system/kube-controller-manager-test-preload-493455"
	Sep 08 11:39:18 test-preload-493455 kubelet[1154]: I0908 11:39:18.121988    1154 apiserver.go:52] "Watching apiserver"
	Sep 08 11:39:18 test-preload-493455 kubelet[1154]: E0908 11:39:18.129931    1154 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-rhwsq" podUID="e7a9e2d4-aaf4-4f77-a740-6149c1827e71"
	Sep 08 11:39:18 test-preload-493455 kubelet[1154]: I0908 11:39:18.150218    1154 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Sep 08 11:39:18 test-preload-493455 kubelet[1154]: I0908 11:39:18.166380    1154 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d2586d99-c7c4-4b31-baae-2f565245f60b-tmp\") pod \"storage-provisioner\" (UID: \"d2586d99-c7c4-4b31-baae-2f565245f60b\") " pod="kube-system/storage-provisioner"
	Sep 08 11:39:18 test-preload-493455 kubelet[1154]: I0908 11:39:18.166436    1154 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3f774274-5790-4178-9271-42cdc552b8b7-xtables-lock\") pod \"kube-proxy-hknnq\" (UID: \"3f774274-5790-4178-9271-42cdc552b8b7\") " pod="kube-system/kube-proxy-hknnq"
	Sep 08 11:39:18 test-preload-493455 kubelet[1154]: I0908 11:39:18.166475    1154 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3f774274-5790-4178-9271-42cdc552b8b7-lib-modules\") pod \"kube-proxy-hknnq\" (UID: \"3f774274-5790-4178-9271-42cdc552b8b7\") " pod="kube-system/kube-proxy-hknnq"
	Sep 08 11:39:18 test-preload-493455 kubelet[1154]: E0908 11:39:18.167326    1154 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 08 11:39:18 test-preload-493455 kubelet[1154]: E0908 11:39:18.167650    1154 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e7a9e2d4-aaf4-4f77-a740-6149c1827e71-config-volume podName:e7a9e2d4-aaf4-4f77-a740-6149c1827e71 nodeName:}" failed. No retries permitted until 2025-09-08 11:39:18.667585238 +0000 UTC m=+6.648083689 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e7a9e2d4-aaf4-4f77-a740-6149c1827e71-config-volume") pod "coredns-668d6bf9bc-rhwsq" (UID: "e7a9e2d4-aaf4-4f77-a740-6149c1827e71") : object "kube-system"/"coredns" not registered
	Sep 08 11:39:18 test-preload-493455 kubelet[1154]: E0908 11:39:18.671999    1154 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 08 11:39:18 test-preload-493455 kubelet[1154]: E0908 11:39:18.672071    1154 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e7a9e2d4-aaf4-4f77-a740-6149c1827e71-config-volume podName:e7a9e2d4-aaf4-4f77-a740-6149c1827e71 nodeName:}" failed. No retries permitted until 2025-09-08 11:39:19.672057607 +0000 UTC m=+7.652556069 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e7a9e2d4-aaf4-4f77-a740-6149c1827e71-config-volume") pod "coredns-668d6bf9bc-rhwsq" (UID: "e7a9e2d4-aaf4-4f77-a740-6149c1827e71") : object "kube-system"/"coredns" not registered
	Sep 08 11:39:19 test-preload-493455 kubelet[1154]: I0908 11:39:19.465304    1154 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
	Sep 08 11:39:19 test-preload-493455 kubelet[1154]: E0908 11:39:19.694278    1154 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 08 11:39:19 test-preload-493455 kubelet[1154]: E0908 11:39:19.694362    1154 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e7a9e2d4-aaf4-4f77-a740-6149c1827e71-config-volume podName:e7a9e2d4-aaf4-4f77-a740-6149c1827e71 nodeName:}" failed. No retries permitted until 2025-09-08 11:39:21.694345558 +0000 UTC m=+9.674844012 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e7a9e2d4-aaf4-4f77-a740-6149c1827e71-config-volume") pod "coredns-668d6bf9bc-rhwsq" (UID: "e7a9e2d4-aaf4-4f77-a740-6149c1827e71") : object "kube-system"/"coredns" not registered
	Sep 08 11:39:22 test-preload-493455 kubelet[1154]: E0908 11:39:22.207974    1154 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757331562207656310,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 08 11:39:22 test-preload-493455 kubelet[1154]: E0908 11:39:22.207994    1154 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757331562207656310,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 08 11:39:24 test-preload-493455 kubelet[1154]: I0908 11:39:24.307061    1154 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 08 11:39:30 test-preload-493455 kubelet[1154]: I0908 11:39:30.865586    1154 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 08 11:39:32 test-preload-493455 kubelet[1154]: E0908 11:39:32.211188    1154 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757331572209721380,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 08 11:39:32 test-preload-493455 kubelet[1154]: E0908 11:39:32.211228    1154 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757331572209721380,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [89281e47cae61ef251d179198191e9fde82182c3f557115f455e962c4f33af1d] <==
	I0908 11:39:18.855485       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-493455 -n test-preload-493455
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-493455 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-493455" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-493455
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-493455: (1.136017389s)
--- FAIL: TestPreload (149.54s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (51.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-903924 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-903924 --driver=kvm2  --container-runtime=crio: signal: killed (47.954423902s)

                                                
                                                
-- stdout --
	* [NoKubernetes-903924] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21503
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21503-748170/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21503-748170/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-903924

                                                
                                                
-- /stdout --
no_kubernetes_test.go:195: failed to start minikube with args: "out/minikube-linux-amd64 start -p NoKubernetes-903924 --driver=kvm2  --container-runtime=crio" : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestNoKubernetes/serial/StartNoArgs]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p NoKubernetes-903924 -n NoKubernetes-903924
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p NoKubernetes-903924 -n NoKubernetes-903924: exit status 3 (3.238790955s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0908 11:46:39.277613  794851 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.246:22: connect: no route to host
	E0908 11:46:39.277637  794851 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.246:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 3 (may be ok)
helpers_test.go:249: "NoKubernetes-903924" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (51.19s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.52s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-h5hcp" [d20477db-7399-4b1f-ad64-6cfa0fb34d60] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0908 11:58:47.304799  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/flannel-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:58:48.424561  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/custom-flannel-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:58:52.584472  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/bridge-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:58:52.931566  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/calico-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:58:52.937984  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/calico-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:58:52.949313  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/calico-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:58:52.970725  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/calico-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:58:53.012122  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/calico-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:58:53.093597  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/calico-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:58:53.255252  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/calico-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:58:53.527020  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/auto-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:58:53.577567  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/calico-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:58:54.219632  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/calico-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:58:55.501666  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/calico-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:58:56.712484  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/functional-461050/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:58:58.063619  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/calico-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:58:59.432129  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/kindnet-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:59:01.391079  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/old-k8s-version-073517/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:59:03.185960  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/calico-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:59:13.427699  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/calico-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:59:21.232012  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/auto-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:59:23.368747  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/enable-default-cni-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:59:27.135425  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/kindnet-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:59:33.546200  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/bridge-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:59:33.909033  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/calico-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:59:42.352449  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/old-k8s-version-073517/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:59:42.604184  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/no-preload-474007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:59:42.610600  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/no-preload-474007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:59:42.621990  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/no-preload-474007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:59:42.643361  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/no-preload-474007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:59:42.684865  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/no-preload-474007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:59:42.766461  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/no-preload-474007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:59:42.928023  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/no-preload-474007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:59:43.249904  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/no-preload-474007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:59:43.892172  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/no-preload-474007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:59:45.174276  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/no-preload-474007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:59:47.735778  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/no-preload-474007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:59:52.857112  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/no-preload-474007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:00:03.099243  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/no-preload-474007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:00:09.226883  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/flannel-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:00:14.871156  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/calico-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:00:23.581604  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/no-preload-474007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:00:55.468074  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/bridge-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:01:04.274369  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/old-k8s-version-073517/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:01:04.543777  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/no-preload-474007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:01:04.563220  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/custom-flannel-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:01:32.266552  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/custom-flannel-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:01:36.792846  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/calico-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:01:39.509528  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/enable-default-cni-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:02:07.210308  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/enable-default-cni-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:02:25.364199  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/flannel-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:02:26.465288  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/no-preload-474007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:02:53.068690  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/flannel-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:03:11.608214  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/bridge-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:03:20.415639  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/old-k8s-version-073517/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:03:30.882241  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:03:39.310126  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/bridge-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:03:48.116385  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/old-k8s-version-073517/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:03:52.931961  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/calico-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:03:53.527355  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/auto-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:03:56.712301  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/functional-461050/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:03:59.431786  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/kindnet-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:04:20.634414  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/calico-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:04:42.603462  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/no-preload-474007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:05:10.307566  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/no-preload-474007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:05:19.787110  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/functional-461050/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:06:04.563528  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/custom-flannel-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:06:33.961091  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:06:39.509274  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/enable-default-cni-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:07:25.364584  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/flannel-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-149795 -n default-k8s-diff-port-149795
start_stop_delete_test.go:272: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-09-08 12:07:43.564623033 +0000 UTC m=+5915.527026206
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context default-k8s-diff-port-149795 describe po kubernetes-dashboard-855c9754f9-h5hcp -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context default-k8s-diff-port-149795 describe po kubernetes-dashboard-855c9754f9-h5hcp -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-h5hcp
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             default-k8s-diff-port-149795/192.168.39.109
Start Time:       Mon, 08 Sep 2025 11:58:31 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-msxnq (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-msxnq:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  9m12s                  default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-h5hcp to default-k8s-diff-port-149795
Warning  Failed     8m30s                  kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": copying system image from manifest list: determining manifest MIME type for docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     6m8s                   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    3m14s (x5 over 9m11s)  kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     2m25s (x5 over 8m30s)  kubelet            Error: ErrImagePull
Warning  Failed     2m25s (x3 over 7m7s)   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": fetching target platform image selected from manifest list: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     80s (x16 over 8m29s)   kubelet            Error: ImagePullBackOff
Normal   BackOff    15s (x21 over 8m29s)   kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context default-k8s-diff-port-149795 logs kubernetes-dashboard-855c9754f9-h5hcp -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-149795 logs kubernetes-dashboard-855c9754f9-h5hcp -n kubernetes-dashboard: exit status 1 (73.172745ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-h5hcp" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:272: kubectl --context default-k8s-diff-port-149795 logs kubernetes-dashboard-855c9754f9-h5hcp -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
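The kubelet events above attribute the pull failures to Docker Hub's unauthenticated rate limit (toomanyrequests) rather than to the cluster itself. The lines below are only a rough sketch of one way a job could pre-seed the dashboard image so the kubelet never has to pull from docker.io; the profile name is taken from this run, but the commands are an illustration, not something the harness executes (the pod actually pins the image by digest, so pulling the bare tag is an approximation):

	# sketch: pull on the host first, using the host's own quota or credentials
	docker pull docker.io/kubernetesui/dashboard:v2.7.0
	# then load the cached image into this profile's node so the runtime finds it locally
	out/minikube-linux-amd64 -p default-k8s-diff-port-149795 image load docker.io/kubernetesui/dashboard:v2.7.0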
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-149795 -n default-k8s-diff-port-149795
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-149795 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-149795 logs -n 25: (1.305685245s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────
─────┐
	│ COMMAND │                                                                                                                    ARGS                                                                                                                     │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────
─────┤
	│ start   │ -p newest-cni-549052 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0 │ newest-cni-549052            │ jenkins │ v1.36.0 │ 08 Sep 25 11:56 UTC │ 08 Sep 25 11:57 UTC │
	│ addons  │ enable dashboard -p no-preload-474007 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                │ no-preload-474007            │ jenkins │ v1.36.0 │ 08 Sep 25 11:56 UTC │ 08 Sep 25 11:56 UTC │
	│ start   │ -p no-preload-474007 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                       │ no-preload-474007            │ jenkins │ v1.36.0 │ 08 Sep 25 11:56 UTC │ 08 Sep 25 11:57 UTC │
	│ addons  │ enable dashboard -p embed-certs-256792 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                               │ embed-certs-256792           │ jenkins │ v1.36.0 │ 08 Sep 25 11:56 UTC │ 08 Sep 25 11:56 UTC │
	│ start   │ -p embed-certs-256792 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                        │ embed-certs-256792           │ jenkins │ v1.36.0 │ 08 Sep 25 11:56 UTC │ 08 Sep 25 11:57 UTC │
	│ addons  │ enable metrics-server -p newest-cni-549052 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                     │ newest-cni-549052            │ jenkins │ v1.36.0 │ 08 Sep 25 11:57 UTC │ 08 Sep 25 11:57 UTC │
	│ stop    │ -p newest-cni-549052 --alsologtostderr -v=3                                                                                                                                                                                                 │ newest-cni-549052            │ jenkins │ v1.36.0 │ 08 Sep 25 11:57 UTC │ 08 Sep 25 11:57 UTC │
	│ addons  │ enable dashboard -p newest-cni-549052 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                │ newest-cni-549052            │ jenkins │ v1.36.0 │ 08 Sep 25 11:57 UTC │ 08 Sep 25 11:57 UTC │
	│ start   │ -p newest-cni-549052 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0 │ newest-cni-549052            │ jenkins │ v1.36.0 │ 08 Sep 25 11:57 UTC │ 08 Sep 25 11:58 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-149795 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                     │ default-k8s-diff-port-149795 │ jenkins │ v1.36.0 │ 08 Sep 25 11:57 UTC │ 08 Sep 25 11:57 UTC │
	│ start   │ -p default-k8s-diff-port-149795 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-149795 │ jenkins │ v1.36.0 │ 08 Sep 25 11:57 UTC │ 08 Sep 25 11:58 UTC │
	│ image   │ embed-certs-256792 image list --format=json                                                                                                                                                                                                 │ embed-certs-256792           │ jenkins │ v1.36.0 │ 08 Sep 25 11:58 UTC │ 08 Sep 25 11:58 UTC │
	│ pause   │ -p embed-certs-256792 --alsologtostderr -v=1                                                                                                                                                                                                │ embed-certs-256792           │ jenkins │ v1.36.0 │ 08 Sep 25 11:58 UTC │ 08 Sep 25 11:58 UTC │
	│ unpause │ -p embed-certs-256792 --alsologtostderr -v=1                                                                                                                                                                                                │ embed-certs-256792           │ jenkins │ v1.36.0 │ 08 Sep 25 11:58 UTC │ 08 Sep 25 11:58 UTC │
	│ image   │ newest-cni-549052 image list --format=json                                                                                                                                                                                                  │ newest-cni-549052            │ jenkins │ v1.36.0 │ 08 Sep 25 11:58 UTC │ 08 Sep 25 11:58 UTC │
	│ pause   │ -p newest-cni-549052 --alsologtostderr -v=1                                                                                                                                                                                                 │ newest-cni-549052            │ jenkins │ v1.36.0 │ 08 Sep 25 11:58 UTC │ 08 Sep 25 11:58 UTC │
	│ delete  │ -p embed-certs-256792                                                                                                                                                                                                                       │ embed-certs-256792           │ jenkins │ v1.36.0 │ 08 Sep 25 11:58 UTC │ 08 Sep 25 11:58 UTC │
	│ delete  │ -p embed-certs-256792                                                                                                                                                                                                                       │ embed-certs-256792           │ jenkins │ v1.36.0 │ 08 Sep 25 11:58 UTC │ 08 Sep 25 11:58 UTC │
	│ image   │ no-preload-474007 image list --format=json                                                                                                                                                                                                  │ no-preload-474007            │ jenkins │ v1.36.0 │ 08 Sep 25 11:58 UTC │ 08 Sep 25 11:58 UTC │
	│ pause   │ -p no-preload-474007 --alsologtostderr -v=1                                                                                                                                                                                                 │ no-preload-474007            │ jenkins │ v1.36.0 │ 08 Sep 25 11:58 UTC │ 08 Sep 25 11:58 UTC │
	│ delete  │ -p newest-cni-549052                                                                                                                                                                                                                        │ newest-cni-549052            │ jenkins │ v1.36.0 │ 08 Sep 25 11:58 UTC │ 08 Sep 25 11:58 UTC │
	│ unpause │ -p no-preload-474007 --alsologtostderr -v=1                                                                                                                                                                                                 │ no-preload-474007            │ jenkins │ v1.36.0 │ 08 Sep 25 11:58 UTC │ 08 Sep 25 11:58 UTC │
	│ delete  │ -p newest-cni-549052                                                                                                                                                                                                                        │ newest-cni-549052            │ jenkins │ v1.36.0 │ 08 Sep 25 11:58 UTC │ 08 Sep 25 11:58 UTC │
	│ delete  │ -p no-preload-474007                                                                                                                                                                                                                        │ no-preload-474007            │ jenkins │ v1.36.0 │ 08 Sep 25 11:58 UTC │ 08 Sep 25 11:58 UTC │
	│ delete  │ -p no-preload-474007                                                                                                                                                                                                                        │ no-preload-474007            │ jenkins │ v1.36.0 │ 08 Sep 25 11:58 UTC │ 08 Sep 25 11:58 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
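The audit rows above record the raw minikube invocations for this run. As a hedged illustration only (the profile names are specific to this run and the binary path is the tree-local build), the final image/pause/delete checks could be replayed by hand:

	# list the loaded images as JSON, as recorded in the audit log
	out/minikube-linux-amd64 -p no-preload-474007 image list --format=json
	# pause, unpause, then tear the profile down, mirroring the last audit rows
	out/minikube-linux-amd64 pause -p no-preload-474007 --alsologtostderr -v=1
	out/minikube-linux-amd64 unpause -p no-preload-474007 --alsologtostderr -v=1
	out/minikube-linux-amd64 delete -p no-preload-474007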
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 11:57:42
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 11:57:42.898398  812547 out.go:360] Setting OutFile to fd 1 ...
	I0908 11:57:42.898550  812547 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:57:42.898562  812547 out.go:374] Setting ErrFile to fd 2...
	I0908 11:57:42.898566  812547 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:57:42.898823  812547 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21503-748170/.minikube/bin
	I0908 11:57:42.899482  812547 out.go:368] Setting JSON to false
	I0908 11:57:42.900654  812547 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":74379,"bootTime":1757258284,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 11:57:42.900724  812547 start.go:140] virtualization: kvm guest
	I0908 11:57:42.902501  812547 out.go:179] * [default-k8s-diff-port-149795] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 11:57:42.903989  812547 notify.go:220] Checking for updates...
	I0908 11:57:42.903996  812547 out.go:179]   - MINIKUBE_LOCATION=21503
	I0908 11:57:42.906751  812547 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 11:57:42.908054  812547 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21503-748170/kubeconfig
	I0908 11:57:42.909127  812547 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21503-748170/.minikube
	I0908 11:57:42.910157  812547 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 11:57:42.911116  812547 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 11:57:42.912696  812547 config.go:182] Loaded profile config "default-k8s-diff-port-149795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 11:57:42.913410  812547 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:57:42.913483  812547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:57:42.929818  812547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41323
	I0908 11:57:42.930451  812547 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:57:42.931128  812547 main.go:141] libmachine: Using API Version  1
	I0908 11:57:42.931169  812547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:57:42.931600  812547 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:57:42.931872  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .DriverName
	I0908 11:57:42.932131  812547 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 11:57:42.932474  812547 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:57:42.932533  812547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:57:42.948994  812547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41571
	I0908 11:57:42.949488  812547 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:57:42.950108  812547 main.go:141] libmachine: Using API Version  1
	I0908 11:57:42.950138  812547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:57:42.950472  812547 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:57:42.950690  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .DriverName
	I0908 11:57:42.990429  812547 out.go:179] * Using the kvm2 driver based on existing profile
	I0908 11:57:42.991742  812547 start.go:304] selected driver: kvm2
	I0908 11:57:42.991765  812547 start.go:918] validating driver "kvm2" against &{Name:default-k8s-diff-port-149795 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-149795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] Liste
nAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 11:57:42.991903  812547 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 11:57:42.992936  812547 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 11:57:42.993033  812547 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21503-748170/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0908 11:57:43.010450  812547 install.go:137] /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0908 11:57:43.010937  812547 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 11:57:43.010979  812547 cni.go:84] Creating CNI manager for ""
	I0908 11:57:43.011021  812547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0908 11:57:43.011075  812547 start.go:348] cluster config:
	{Name:default-k8s-diff-port-149795 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-149795 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiratio
n:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 11:57:43.011196  812547 iso.go:125] acquiring lock: {Name:mk013a3bcd14eba8870ec8e08630600588ab11c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 11:57:43.012784  812547 out.go:179] * Starting "default-k8s-diff-port-149795" primary control-plane node in "default-k8s-diff-port-149795" cluster
	I0908 11:57:40.491914  811458 node_ready.go:49] node "no-preload-474007" is "Ready"
	I0908 11:57:40.491945  811458 node_ready.go:38] duration metric: took 6.509479549s for node "no-preload-474007" to be "Ready" ...
	I0908 11:57:40.491961  811458 api_server.go:52] waiting for apiserver process to appear ...
	I0908 11:57:40.492011  811458 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 11:57:40.518972  811458 api_server.go:72] duration metric: took 6.856993983s to wait for apiserver process to appear ...
	I0908 11:57:40.519007  811458 api_server.go:88] waiting for apiserver healthz status ...
	I0908 11:57:40.519036  811458 api_server.go:253] Checking apiserver healthz at https://192.168.61.59:8443/healthz ...
	I0908 11:57:40.526000  811458 api_server.go:279] https://192.168.61.59:8443/healthz returned 200:
	ok
	I0908 11:57:40.527220  811458 api_server.go:141] control plane version: v1.34.0
	I0908 11:57:40.527247  811458 api_server.go:131] duration metric: took 8.230769ms to wait for apiserver health ...
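The healthz probe logged above can be reproduced from the CI host; a hedged sketch, assuming the 192.168.61.0/24 KVM network is still up and anonymous access to /healthz is allowed (the Kubernetes default). The --cacert path is an assumption derived from the MINIKUBE_HOME shown earlier in this log:

	# expect the literal body "ok" on success; -k skips TLS verification
	curl -k https://192.168.61.59:8443/healthz
	# alternatively, verify against minikube's cluster CA instead of using -k
	curl --cacert /home/jenkins/minikube-integration/21503-748170/.minikube/ca.crt https://192.168.61.59:8443/healthz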
	I0908 11:57:40.527258  811458 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 11:57:40.532690  811458 system_pods.go:59] 8 kube-system pods found
	I0908 11:57:40.532722  811458 system_pods.go:61] "coredns-66bc5c9577-nvjls" [1b079ef7-d1a6-4e01-a88c-b5c7fa725797] Running
	I0908 11:57:40.532734  811458 system_pods.go:61] "etcd-no-preload-474007" [8fd2fdfc-a6e2-4ec0-a61d-04bd593db882] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 11:57:40.532738  811458 system_pods.go:61] "kube-apiserver-no-preload-474007" [948963be-9734-4dec-b2aa-e97e0f7722e3] Running
	I0908 11:57:40.532744  811458 system_pods.go:61] "kube-controller-manager-no-preload-474007" [4a53f493-ad8e-42b8-bbb3-ce0b26bd5985] Running
	I0908 11:57:40.532748  811458 system_pods.go:61] "kube-proxy-9fljr" [63bf4b52-6670-4c76-af05-863f9e5f233e] Running
	I0908 11:57:40.532751  811458 system_pods.go:61] "kube-scheduler-no-preload-474007" [9c847320-1276-44a9-a435-f0b4e0939801] Running
	I0908 11:57:40.532757  811458 system_pods.go:61] "metrics-server-746fcd58dc-bbz2v" [a9b335ae-0a9f-4124-9a90-bf148a7580ee] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 11:57:40.532760  811458 system_pods.go:61] "storage-provisioner" [5ef0a874-a428-461f-8a06-9729c469a4b4] Running
	I0908 11:57:40.532767  811458 system_pods.go:74] duration metric: took 5.502732ms to wait for pod list to return data ...
	I0908 11:57:40.532778  811458 default_sa.go:34] waiting for default service account to be created ...
	I0908 11:57:40.536081  811458 default_sa.go:45] found service account: "default"
	I0908 11:57:40.536103  811458 default_sa.go:55] duration metric: took 3.319566ms for default service account to be created ...
	I0908 11:57:40.536111  811458 system_pods.go:116] waiting for k8s-apps to be running ...
	I0908 11:57:40.538802  811458 system_pods.go:86] 8 kube-system pods found
	I0908 11:57:40.538827  811458 system_pods.go:89] "coredns-66bc5c9577-nvjls" [1b079ef7-d1a6-4e01-a88c-b5c7fa725797] Running
	I0908 11:57:40.538840  811458 system_pods.go:89] "etcd-no-preload-474007" [8fd2fdfc-a6e2-4ec0-a61d-04bd593db882] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 11:57:40.538848  811458 system_pods.go:89] "kube-apiserver-no-preload-474007" [948963be-9734-4dec-b2aa-e97e0f7722e3] Running
	I0908 11:57:40.538859  811458 system_pods.go:89] "kube-controller-manager-no-preload-474007" [4a53f493-ad8e-42b8-bbb3-ce0b26bd5985] Running
	I0908 11:57:40.538864  811458 system_pods.go:89] "kube-proxy-9fljr" [63bf4b52-6670-4c76-af05-863f9e5f233e] Running
	I0908 11:57:40.538869  811458 system_pods.go:89] "kube-scheduler-no-preload-474007" [9c847320-1276-44a9-a435-f0b4e0939801] Running
	I0908 11:57:40.538878  811458 system_pods.go:89] "metrics-server-746fcd58dc-bbz2v" [a9b335ae-0a9f-4124-9a90-bf148a7580ee] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 11:57:40.538886  811458 system_pods.go:89] "storage-provisioner" [5ef0a874-a428-461f-8a06-9729c469a4b4] Running
	I0908 11:57:40.538898  811458 system_pods.go:126] duration metric: took 2.779097ms to wait for k8s-apps to be running ...
	I0908 11:57:40.538912  811458 system_svc.go:44] waiting for kubelet service to be running ....
	I0908 11:57:40.538969  811458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 11:57:40.557715  811458 system_svc.go:56] duration metric: took 18.796502ms WaitForService to wait for kubelet
	I0908 11:57:40.557743  811458 kubeadm.go:578] duration metric: took 6.895770979s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
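The kubelet check above is a systemctl probe run over SSH inside the guest; a hedged equivalent from the host while the profile still exists:

	# prints "active" (exit 0) when the kubelet unit is running
	out/minikube-linux-amd64 -p no-preload-474007 ssh "sudo systemctl is-active kubelet"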
	I0908 11:57:40.557769  811458 node_conditions.go:102] verifying NodePressure condition ...
	I0908 11:57:40.563199  811458 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0908 11:57:40.563230  811458 node_conditions.go:123] node cpu capacity is 2
	I0908 11:57:40.563245  811458 node_conditions.go:105] duration metric: took 5.46967ms to run NodePressure ...
	I0908 11:57:40.563261  811458 start.go:241] waiting for startup goroutines ...
	I0908 11:57:40.563272  811458 start.go:246] waiting for cluster config update ...
	I0908 11:57:40.563312  811458 start.go:255] writing updated cluster config ...
	I0908 11:57:40.563673  811458 ssh_runner.go:195] Run: rm -f paused
	I0908 11:57:40.570429  811458 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 11:57:40.574220  811458 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-nvjls" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:57:40.580056  811458 pod_ready.go:94] pod "coredns-66bc5c9577-nvjls" is "Ready"
	I0908 11:57:40.580080  811458 pod_ready.go:86] duration metric: took 5.831172ms for pod "coredns-66bc5c9577-nvjls" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:57:40.584053  811458 pod_ready.go:83] waiting for pod "etcd-no-preload-474007" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:57:41.589215  811458 pod_ready.go:94] pod "etcd-no-preload-474007" is "Ready"
	I0908 11:57:41.589265  811458 pod_ready.go:86] duration metric: took 1.005188219s for pod "etcd-no-preload-474007" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:57:41.592034  811458 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-474007" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:57:41.597858  811458 pod_ready.go:94] pod "kube-apiserver-no-preload-474007" is "Ready"
	I0908 11:57:41.597893  811458 pod_ready.go:86] duration metric: took 5.830632ms for pod "kube-apiserver-no-preload-474007" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:57:41.600546  811458 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-474007" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:57:41.777006  811458 pod_ready.go:94] pod "kube-controller-manager-no-preload-474007" is "Ready"
	I0908 11:57:41.777036  811458 pod_ready.go:86] duration metric: took 176.468219ms for pod "kube-controller-manager-no-preload-474007" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:57:41.976524  811458 pod_ready.go:83] waiting for pod "kube-proxy-9fljr" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:57:42.375984  811458 pod_ready.go:94] pod "kube-proxy-9fljr" is "Ready"
	I0908 11:57:42.376021  811458 pod_ready.go:86] duration metric: took 399.459333ms for pod "kube-proxy-9fljr" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:57:42.576413  811458 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-474007" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:57:42.975354  811458 pod_ready.go:94] pod "kube-scheduler-no-preload-474007" is "Ready"
	I0908 11:57:42.975387  811458 pod_ready.go:86] duration metric: took 398.943076ms for pod "kube-scheduler-no-preload-474007" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:57:42.975407  811458 pod_ready.go:40] duration metric: took 2.404937403s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 11:57:43.028540  811458 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0908 11:57:43.030362  811458 out.go:179] * Done! kubectl is now configured to use "no-preload-474007" cluster and "default" namespace by default
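The extra pod wait above polls kube-system pods by the labels listed in the log; a hedged kubectl equivalent, using the context name the Done message says was configured:

	kubectl --context no-preload-474007 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=240s
	kubectl --context no-preload-474007 -n kube-system wait pod -l component=kube-apiserver --for=condition=Ready --timeout=240s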
	I0908 11:57:42.113167  811802 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 11:57:42.151595  811802 node_ready.go:35] waiting up to 6m0s for node "embed-certs-256792" to be "Ready" ...
	I0908 11:57:42.154026  811802 node_ready.go:49] node "embed-certs-256792" is "Ready"
	I0908 11:57:42.154059  811802 node_ready.go:38] duration metric: took 2.406931ms for node "embed-certs-256792" to be "Ready" ...
	I0908 11:57:42.154073  811802 api_server.go:52] waiting for apiserver process to appear ...
	I0908 11:57:42.154122  811802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 11:57:42.179049  811802 api_server.go:72] duration metric: took 369.167387ms to wait for apiserver process to appear ...
	I0908 11:57:42.179073  811802 api_server.go:88] waiting for apiserver healthz status ...
	I0908 11:57:42.179095  811802 api_server.go:253] Checking apiserver healthz at https://192.168.50.136:8443/healthz ...
	I0908 11:57:42.185770  811802 api_server.go:279] https://192.168.50.136:8443/healthz returned 200:
	ok
	I0908 11:57:42.187424  811802 api_server.go:141] control plane version: v1.34.0
	I0908 11:57:42.187456  811802 api_server.go:131] duration metric: took 8.373725ms to wait for apiserver health ...
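The same apiserver process check now runs for embed-certs-256792; a hedged way to repeat the pgrep step by hand over minikube ssh:

	# prints the PID of the newest process whose full command line matches the pattern
	out/minikube-linux-amd64 -p embed-certs-256792 ssh "sudo pgrep -xnf 'kube-apiserver.*minikube.*'"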
	I0908 11:57:42.187466  811802 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 11:57:42.191638  811802 system_pods.go:59] 8 kube-system pods found
	I0908 11:57:42.191666  811802 system_pods.go:61] "coredns-66bc5c9577-24xv6" [eb1ab4a7-273c-49a1-8d80-2e3145582e9a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 11:57:42.191675  811802 system_pods.go:61] "etcd-embed-certs-256792" [5012dd79-f6a2-49b6-a6ba-e3cb31c0ab84] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 11:57:42.191683  811802 system_pods.go:61] "kube-apiserver-embed-certs-256792" [d764f944-ceb8-4861-be25-e30f034a4c0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 11:57:42.191705  811802 system_pods.go:61] "kube-controller-manager-embed-certs-256792" [63935a70-f702-45ee-9904-7d07ee903d79] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 11:57:42.191714  811802 system_pods.go:61] "kube-proxy-ph8c8" [bae0a504-7714-4c5b-af89-54a0f2d5c5fa] Running
	I0908 11:57:42.191720  811802 system_pods.go:61] "kube-scheduler-embed-certs-256792" [64de836d-209c-4fcb-91e5-a8266cd048c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 11:57:42.191725  811802 system_pods.go:61] "metrics-server-746fcd58dc-97dr2" [c00533cc-ec1a-45af-a5ee-4f3d7e77d95f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 11:57:42.191729  811802 system_pods.go:61] "storage-provisioner" [bb98e575-b5ce-4181-b7e5-9ea41fde8295] Running
	I0908 11:57:42.191735  811802 system_pods.go:74] duration metric: took 4.262676ms to wait for pod list to return data ...
	I0908 11:57:42.191745  811802 default_sa.go:34] waiting for default service account to be created ...
	I0908 11:57:42.195175  811802 default_sa.go:45] found service account: "default"
	I0908 11:57:42.195203  811802 default_sa.go:55] duration metric: took 3.450712ms for default service account to be created ...
	I0908 11:57:42.195216  811802 system_pods.go:116] waiting for k8s-apps to be running ...
	I0908 11:57:42.200073  811802 system_pods.go:86] 8 kube-system pods found
	I0908 11:57:42.200097  811802 system_pods.go:89] "coredns-66bc5c9577-24xv6" [eb1ab4a7-273c-49a1-8d80-2e3145582e9a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 11:57:42.200127  811802 system_pods.go:89] "etcd-embed-certs-256792" [5012dd79-f6a2-49b6-a6ba-e3cb31c0ab84] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 11:57:42.200136  811802 system_pods.go:89] "kube-apiserver-embed-certs-256792" [d764f944-ceb8-4861-be25-e30f034a4c0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 11:57:42.200143  811802 system_pods.go:89] "kube-controller-manager-embed-certs-256792" [63935a70-f702-45ee-9904-7d07ee903d79] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 11:57:42.200147  811802 system_pods.go:89] "kube-proxy-ph8c8" [bae0a504-7714-4c5b-af89-54a0f2d5c5fa] Running
	I0908 11:57:42.200152  811802 system_pods.go:89] "kube-scheduler-embed-certs-256792" [64de836d-209c-4fcb-91e5-a8266cd048c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 11:57:42.200157  811802 system_pods.go:89] "metrics-server-746fcd58dc-97dr2" [c00533cc-ec1a-45af-a5ee-4f3d7e77d95f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 11:57:42.200163  811802 system_pods.go:89] "storage-provisioner" [bb98e575-b5ce-4181-b7e5-9ea41fde8295] Running
	I0908 11:57:42.200171  811802 system_pods.go:126] duration metric: took 4.949191ms to wait for k8s-apps to be running ...
	I0908 11:57:42.200177  811802 system_svc.go:44] waiting for kubelet service to be running ....
	I0908 11:57:42.200218  811802 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 11:57:42.237702  811802 system_svc.go:56] duration metric: took 37.51307ms WaitForService to wait for kubelet
	I0908 11:57:42.237736  811802 kubeadm.go:578] duration metric: took 427.859269ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 11:57:42.237761  811802 node_conditions.go:102] verifying NodePressure condition ...
	I0908 11:57:42.243804  811802 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0908 11:57:42.243831  811802 node_conditions.go:123] node cpu capacity is 2
	I0908 11:57:42.243846  811802 node_conditions.go:105] duration metric: took 6.080641ms to run NodePressure ...
	I0908 11:57:42.243861  811802 start.go:241] waiting for startup goroutines ...
	I0908 11:57:42.273355  811802 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0908 11:57:42.273380  811802 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0908 11:57:42.281406  811802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 11:57:42.293648  811802 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0908 11:57:42.293677  811802 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0908 11:57:42.306147  811802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0908 11:57:42.331187  811802 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0908 11:57:42.331223  811802 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0908 11:57:42.361906  811802 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0908 11:57:42.361934  811802 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0908 11:57:42.396707  811802 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 11:57:42.396744  811802 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0908 11:57:42.422129  811802 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0908 11:57:42.422161  811802 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0908 11:57:42.446878  811802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 11:57:42.489551  811802 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0908 11:57:42.489587  811802 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0908 11:57:42.574460  811802 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0908 11:57:42.574593  811802 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0908 11:57:42.656186  811802 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0908 11:57:42.656213  811802 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0908 11:57:42.730937  811802 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0908 11:57:42.730976  811802 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0908 11:57:42.801855  811802 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0908 11:57:42.801878  811802 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0908 11:57:42.865770  811802 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0908 11:57:42.865799  811802 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0908 11:57:42.937292  811802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0908 11:57:44.045454  811802 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.739259027s)
	I0908 11:57:44.045539  811802 main.go:141] libmachine: Making call to close driver server
	I0908 11:57:44.045556  811802 main.go:141] libmachine: (embed-certs-256792) Calling .Close
	I0908 11:57:44.045571  811802 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.764122817s)
	I0908 11:57:44.045629  811802 main.go:141] libmachine: Making call to close driver server
	I0908 11:57:44.045644  811802 main.go:141] libmachine: (embed-certs-256792) Calling .Close
	I0908 11:57:44.045886  811802 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:57:44.045902  811802 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:57:44.045912  811802 main.go:141] libmachine: Making call to close driver server
	I0908 11:57:44.045919  811802 main.go:141] libmachine: (embed-certs-256792) Calling .Close
	I0908 11:57:44.046002  811802 main.go:141] libmachine: (embed-certs-256792) DBG | Closing plugin on server side
	I0908 11:57:44.046040  811802 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:57:44.046072  811802 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:57:44.046088  811802 main.go:141] libmachine: Making call to close driver server
	I0908 11:57:44.046096  811802 main.go:141] libmachine: (embed-certs-256792) Calling .Close
	I0908 11:57:44.046126  811802 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:57:44.046145  811802 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:57:44.046416  811802 main.go:141] libmachine: (embed-certs-256792) DBG | Closing plugin on server side
	I0908 11:57:44.046422  811802 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:57:44.046435  811802 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:57:44.066630  811802 main.go:141] libmachine: Making call to close driver server
	I0908 11:57:44.066651  811802 main.go:141] libmachine: (embed-certs-256792) Calling .Close
	I0908 11:57:44.066953  811802 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:57:44.066969  811802 main.go:141] libmachine: (embed-certs-256792) DBG | Closing plugin on server side
	I0908 11:57:44.066973  811802 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:57:44.158538  811802 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.711611263s)
	I0908 11:57:44.158592  811802 main.go:141] libmachine: Making call to close driver server
	I0908 11:57:44.158604  811802 main.go:141] libmachine: (embed-certs-256792) Calling .Close
	I0908 11:57:44.158971  811802 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:57:44.158994  811802 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:57:44.159004  811802 main.go:141] libmachine: Making call to close driver server
	I0908 11:57:44.159012  811802 main.go:141] libmachine: (embed-certs-256792) Calling .Close
	I0908 11:57:44.159012  811802 main.go:141] libmachine: (embed-certs-256792) DBG | Closing plugin on server side
	I0908 11:57:44.159263  811802 main.go:141] libmachine: (embed-certs-256792) DBG | Closing plugin on server side
	I0908 11:57:44.159301  811802 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:57:44.159349  811802 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:57:44.159365  811802 addons.go:479] Verifying addon metrics-server=true in "embed-certs-256792"
	I0908 11:57:44.453332  811802 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.515974628s)
	I0908 11:57:44.453409  811802 main.go:141] libmachine: Making call to close driver server
	I0908 11:57:44.453427  811802 main.go:141] libmachine: (embed-certs-256792) Calling .Close
	I0908 11:57:44.453789  811802 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:57:44.453811  811802 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:57:44.453825  811802 main.go:141] libmachine: Making call to close driver server
	I0908 11:57:44.453834  811802 main.go:141] libmachine: (embed-certs-256792) Calling .Close
	I0908 11:57:44.454113  811802 main.go:141] libmachine: (embed-certs-256792) DBG | Closing plugin on server side
	I0908 11:57:44.454157  811802 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:57:44.454167  811802 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:57:44.457548  811802 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-256792 addons enable metrics-server
	
	I0908 11:57:44.459071  811802 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0908 11:57:43.572266  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:43.572778  812159 main.go:141] libmachine: (newest-cni-549052) DBG | unable to find current IP address of domain newest-cni-549052 in network mk-newest-cni-549052
	I0908 11:57:43.572830  812159 main.go:141] libmachine: (newest-cni-549052) DBG | I0908 11:57:43.572770  812218 retry.go:31] will retry after 4.203206967s: waiting for domain to come up
	I0908 11:57:44.460508  811802 addons.go:514] duration metric: took 2.650596404s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0908 11:57:44.460557  811802 start.go:246] waiting for cluster config update ...
	I0908 11:57:44.460590  811802 start.go:255] writing updated cluster config ...
	I0908 11:57:44.460866  811802 ssh_runner.go:195] Run: rm -f paused
	I0908 11:57:44.471864  811802 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 11:57:44.477590  811802 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-24xv6" in "kube-system" namespace to be "Ready" or be gone ...
	W0908 11:57:46.484885  811802 pod_ready.go:104] pod "coredns-66bc5c9577-24xv6" is not "Ready", error: <nil>
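With the metrics-server and dashboard manifests applied above, a hedged post-check from the host (the deployment name metrics-server is inferred from the pod names in this log and may differ in other releases):

	# addon status as minikube reports it
	out/minikube-linux-amd64 -p embed-certs-256792 addons list
	# the metrics-server deployment should reach 1/1 once its image is pulled
	kubectl --context embed-certs-256792 -n kube-system get deploy metrics-server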
	I0908 11:57:43.013813  812547 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 11:57:43.013869  812547 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21503-748170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0908 11:57:43.013882  812547 cache.go:58] Caching tarball of preloaded images
	I0908 11:57:43.013978  812547 preload.go:172] Found /home/jenkins/minikube-integration/21503-748170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0908 11:57:43.013992  812547 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0908 11:57:43.014149  812547 profile.go:143] Saving config to /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/default-k8s-diff-port-149795/config.json ...
	I0908 11:57:43.014393  812547 start.go:360] acquireMachinesLock for default-k8s-diff-port-149795: {Name:mkc620e3900da426b9c156141af1783a234a8bd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0908 11:57:49.235322  812547 start.go:364] duration metric: took 6.220859275s to acquireMachinesLock for "default-k8s-diff-port-149795"
	I0908 11:57:49.235413  812547 start.go:96] Skipping create...Using existing machine configuration
	I0908 11:57:49.235450  812547 fix.go:54] fixHost starting: 
	I0908 11:57:49.235913  812547 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:57:49.235978  812547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:57:49.255609  812547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40215
	I0908 11:57:49.256215  812547 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:57:49.256774  812547 main.go:141] libmachine: Using API Version  1
	I0908 11:57:49.256800  812547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:57:49.257283  812547 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:57:49.257495  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .DriverName
	I0908 11:57:49.257678  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetState
	I0908 11:57:49.259525  812547 fix.go:112] recreateIfNeeded on default-k8s-diff-port-149795: state=Stopped err=<nil>
	I0908 11:57:49.259552  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .DriverName
	W0908 11:57:49.259687  812547 fix.go:138] unexpected machine state, will restart: <nil>
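fix.go finds the existing default-k8s-diff-port-149795 machine Stopped and will restart it; hedged commands to confirm that state from the host, assuming the kvm2 driver's libvirt domain carries the profile name and the user can talk to libvirt:

	sudo virsh domstate default-k8s-diff-port-149795          # typically reports "shut off" before the restart
	out/minikube-linux-amd64 status -p default-k8s-diff-port-149795 --format='{{.Host}}'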
	I0908 11:57:47.779359  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:47.779924  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has current primary IP address 192.168.72.253 and MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:47.779950  812159 main.go:141] libmachine: (newest-cni-549052) found domain IP: 192.168.72.253
	I0908 11:57:47.779964  812159 main.go:141] libmachine: (newest-cni-549052) reserving static IP address...
	I0908 11:57:47.780434  812159 main.go:141] libmachine: (newest-cni-549052) DBG | found host DHCP lease matching {name: "newest-cni-549052", mac: "52:54:00:c8:55:ce", ip: "192.168.72.253"} in network mk-newest-cni-549052: {Iface:virbr4 ExpiryTime:2025-09-08 12:57:39 +0000 UTC Type:0 Mac:52:54:00:c8:55:ce Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:newest-cni-549052 Clientid:01:52:54:00:c8:55:ce}
	I0908 11:57:47.780479  812159 main.go:141] libmachine: (newest-cni-549052) DBG | skip adding static IP to network mk-newest-cni-549052 - found existing host DHCP lease matching {name: "newest-cni-549052", mac: "52:54:00:c8:55:ce", ip: "192.168.72.253"}
	I0908 11:57:47.780490  812159 main.go:141] libmachine: (newest-cni-549052) reserved static IP address 192.168.72.253 for domain newest-cni-549052
	I0908 11:57:47.780506  812159 main.go:141] libmachine: (newest-cni-549052) waiting for SSH...
	I0908 11:57:47.780516  812159 main.go:141] libmachine: (newest-cni-549052) DBG | Getting to WaitForSSH function...
	I0908 11:57:47.782769  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:47.783108  812159 main.go:141] libmachine: (newest-cni-549052) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:55:ce", ip: ""} in network mk-newest-cni-549052: {Iface:virbr4 ExpiryTime:2025-09-08 12:57:39 +0000 UTC Type:0 Mac:52:54:00:c8:55:ce Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:newest-cni-549052 Clientid:01:52:54:00:c8:55:ce}
	I0908 11:57:47.783130  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined IP address 192.168.72.253 and MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:47.783270  812159 main.go:141] libmachine: (newest-cni-549052) DBG | Using SSH client type: external
	I0908 11:57:47.783343  812159 main.go:141] libmachine: (newest-cni-549052) DBG | Using SSH private key: /home/jenkins/minikube-integration/21503-748170/.minikube/machines/newest-cni-549052/id_rsa (-rw-------)
	I0908 11:57:47.783383  812159 main.go:141] libmachine: (newest-cni-549052) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.253 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21503-748170/.minikube/machines/newest-cni-549052/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0908 11:57:47.783407  812159 main.go:141] libmachine: (newest-cni-549052) DBG | About to run SSH command:
	I0908 11:57:47.783418  812159 main.go:141] libmachine: (newest-cni-549052) DBG | exit 0
	I0908 11:57:47.913532  812159 main.go:141] libmachine: (newest-cni-549052) DBG | SSH cmd err, output: <nil>: 
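The WaitForSSH step drives the external ssh client with the options logged above; a hedged way to open the same session manually (IP and key path are taken from this run):

	out/minikube-linux-amd64 -p newest-cni-549052 ssh
	# or call ssh directly, mirroring the logged options
	ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	    -i /home/jenkins/minikube-integration/21503-748170/.minikube/machines/newest-cni-549052/id_rsa docker@192.168.72.253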
	I0908 11:57:47.913970  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetConfigRaw
	I0908 11:57:47.914706  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetIP
	I0908 11:57:47.917266  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:47.917770  812159 main.go:141] libmachine: (newest-cni-549052) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:55:ce", ip: ""} in network mk-newest-cni-549052: {Iface:virbr4 ExpiryTime:2025-09-08 12:57:39 +0000 UTC Type:0 Mac:52:54:00:c8:55:ce Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:newest-cni-549052 Clientid:01:52:54:00:c8:55:ce}
	I0908 11:57:47.917807  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined IP address 192.168.72.253 and MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:47.918057  812159 profile.go:143] Saving config to /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/newest-cni-549052/config.json ...
	I0908 11:57:47.918245  812159 machine.go:93] provisionDockerMachine start ...
	I0908 11:57:47.918264  812159 main.go:141] libmachine: (newest-cni-549052) Calling .DriverName
	I0908 11:57:47.918487  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHHostname
	I0908 11:57:47.920858  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:47.921217  812159 main.go:141] libmachine: (newest-cni-549052) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:55:ce", ip: ""} in network mk-newest-cni-549052: {Iface:virbr4 ExpiryTime:2025-09-08 12:57:39 +0000 UTC Type:0 Mac:52:54:00:c8:55:ce Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:newest-cni-549052 Clientid:01:52:54:00:c8:55:ce}
	I0908 11:57:47.921254  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined IP address 192.168.72.253 and MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:47.921367  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHPort
	I0908 11:57:47.921527  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHKeyPath
	I0908 11:57:47.921672  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHKeyPath
	I0908 11:57:47.921787  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHUsername
	I0908 11:57:47.921932  812159 main.go:141] libmachine: Using SSH client type: native
	I0908 11:57:47.922188  812159 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.72.253 22 <nil> <nil>}
	I0908 11:57:47.922201  812159 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 11:57:48.042912  812159 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0908 11:57:48.042946  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetMachineName
	I0908 11:57:48.043249  812159 buildroot.go:166] provisioning hostname "newest-cni-549052"
	I0908 11:57:48.043281  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetMachineName
	I0908 11:57:48.043490  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHHostname
	I0908 11:57:48.046580  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:48.046968  812159 main.go:141] libmachine: (newest-cni-549052) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:55:ce", ip: ""} in network mk-newest-cni-549052: {Iface:virbr4 ExpiryTime:2025-09-08 12:57:39 +0000 UTC Type:0 Mac:52:54:00:c8:55:ce Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:newest-cni-549052 Clientid:01:52:54:00:c8:55:ce}
	I0908 11:57:48.046999  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined IP address 192.168.72.253 and MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:48.047171  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHPort
	I0908 11:57:48.047362  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHKeyPath
	I0908 11:57:48.047546  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHKeyPath
	I0908 11:57:48.047673  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHUsername
	I0908 11:57:48.047836  812159 main.go:141] libmachine: Using SSH client type: native
	I0908 11:57:48.048117  812159 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.72.253 22 <nil> <nil>}
	I0908 11:57:48.048134  812159 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-549052 && echo "newest-cni-549052" | sudo tee /etc/hostname
	I0908 11:57:48.187686  812159 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-549052
	
	I0908 11:57:48.187712  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHHostname
	I0908 11:57:48.190842  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:48.191117  812159 main.go:141] libmachine: (newest-cni-549052) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:55:ce", ip: ""} in network mk-newest-cni-549052: {Iface:virbr4 ExpiryTime:2025-09-08 12:57:39 +0000 UTC Type:0 Mac:52:54:00:c8:55:ce Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:newest-cni-549052 Clientid:01:52:54:00:c8:55:ce}
	I0908 11:57:48.191147  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined IP address 192.168.72.253 and MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:48.191293  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHPort
	I0908 11:57:48.191523  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHKeyPath
	I0908 11:57:48.191671  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHKeyPath
	I0908 11:57:48.191823  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHUsername
	I0908 11:57:48.192009  812159 main.go:141] libmachine: Using SSH client type: native
	I0908 11:57:48.192281  812159 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.72.253 22 <nil> <nil>}
	I0908 11:57:48.192305  812159 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-549052' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-549052/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-549052' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 11:57:48.321564  812159 main.go:141] libmachine: SSH cmd err, output: <nil>: 
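A hedged check that the hostname and /etc/hosts edits above took effect inside the guest:

	# expect "newest-cni-549052" on the first line and a matching 127.0.1.1 entry on the second
	out/minikube-linux-amd64 -p newest-cni-549052 ssh "hostname; grep 127.0.1.1 /etc/hosts"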
	I0908 11:57:48.321597  812159 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21503-748170/.minikube CaCertPath:/home/jenkins/minikube-integration/21503-748170/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21503-748170/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21503-748170/.minikube}
	I0908 11:57:48.321630  812159 buildroot.go:174] setting up certificates
	I0908 11:57:48.321640  812159 provision.go:84] configureAuth start
	I0908 11:57:48.321648  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetMachineName
	I0908 11:57:48.321954  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetIP
	I0908 11:57:48.325174  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:48.325709  812159 main.go:141] libmachine: (newest-cni-549052) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:55:ce", ip: ""} in network mk-newest-cni-549052: {Iface:virbr4 ExpiryTime:2025-09-08 12:57:39 +0000 UTC Type:0 Mac:52:54:00:c8:55:ce Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:newest-cni-549052 Clientid:01:52:54:00:c8:55:ce}
	I0908 11:57:48.325733  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined IP address 192.168.72.253 and MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:48.325937  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHHostname
	I0908 11:57:48.328870  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:48.329300  812159 main.go:141] libmachine: (newest-cni-549052) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:55:ce", ip: ""} in network mk-newest-cni-549052: {Iface:virbr4 ExpiryTime:2025-09-08 12:57:39 +0000 UTC Type:0 Mac:52:54:00:c8:55:ce Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:newest-cni-549052 Clientid:01:52:54:00:c8:55:ce}
	I0908 11:57:48.329342  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined IP address 192.168.72.253 and MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:48.329486  812159 provision.go:143] copyHostCerts
	I0908 11:57:48.329580  812159 exec_runner.go:144] found /home/jenkins/minikube-integration/21503-748170/.minikube/ca.pem, removing ...
	I0908 11:57:48.329603  812159 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21503-748170/.minikube/ca.pem
	I0908 11:57:48.329674  812159 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21503-748170/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21503-748170/.minikube/ca.pem (1078 bytes)
	I0908 11:57:48.329823  812159 exec_runner.go:144] found /home/jenkins/minikube-integration/21503-748170/.minikube/cert.pem, removing ...
	I0908 11:57:48.329838  812159 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21503-748170/.minikube/cert.pem
	I0908 11:57:48.329872  812159 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21503-748170/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21503-748170/.minikube/cert.pem (1123 bytes)
	I0908 11:57:48.329984  812159 exec_runner.go:144] found /home/jenkins/minikube-integration/21503-748170/.minikube/key.pem, removing ...
	I0908 11:57:48.329997  812159 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21503-748170/.minikube/key.pem
	I0908 11:57:48.330028  812159 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21503-748170/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21503-748170/.minikube/key.pem (1675 bytes)
	I0908 11:57:48.330118  812159 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21503-748170/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21503-748170/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21503-748170/.minikube/certs/ca-key.pem org=jenkins.newest-cni-549052 san=[127.0.0.1 192.168.72.253 localhost minikube newest-cni-549052]
	I0908 11:57:48.491599  812159 provision.go:177] copyRemoteCerts
	I0908 11:57:48.491674  812159 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 11:57:48.491700  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHHostname
	I0908 11:57:48.494839  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:48.495296  812159 main.go:141] libmachine: (newest-cni-549052) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:55:ce", ip: ""} in network mk-newest-cni-549052: {Iface:virbr4 ExpiryTime:2025-09-08 12:57:39 +0000 UTC Type:0 Mac:52:54:00:c8:55:ce Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:newest-cni-549052 Clientid:01:52:54:00:c8:55:ce}
	I0908 11:57:48.495327  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined IP address 192.168.72.253 and MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:48.495533  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHPort
	I0908 11:57:48.495725  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHKeyPath
	I0908 11:57:48.495887  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHUsername
	I0908 11:57:48.496027  812159 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/newest-cni-549052/id_rsa Username:docker}
	I0908 11:57:48.585972  812159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0908 11:57:48.619847  812159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0908 11:57:48.649609  812159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0908 11:57:48.684698  812159 provision.go:87] duration metric: took 363.041145ms to configureAuth
	I0908 11:57:48.684734  812159 buildroot.go:189] setting minikube options for container-runtime
	I0908 11:57:48.684978  812159 config.go:182] Loaded profile config "newest-cni-549052": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 11:57:48.685089  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHHostname
	I0908 11:57:48.687895  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:48.688419  812159 main.go:141] libmachine: (newest-cni-549052) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:55:ce", ip: ""} in network mk-newest-cni-549052: {Iface:virbr4 ExpiryTime:2025-09-08 12:57:39 +0000 UTC Type:0 Mac:52:54:00:c8:55:ce Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:newest-cni-549052 Clientid:01:52:54:00:c8:55:ce}
	I0908 11:57:48.688453  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined IP address 192.168.72.253 and MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:48.688668  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHPort
	I0908 11:57:48.688897  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHKeyPath
	I0908 11:57:48.689047  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHKeyPath
	I0908 11:57:48.689187  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHUsername
	I0908 11:57:48.689353  812159 main.go:141] libmachine: Using SSH client type: native
	I0908 11:57:48.689559  812159 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.72.253 22 <nil> <nil>}
	I0908 11:57:48.689576  812159 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0908 11:57:48.959457  812159 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0908 11:57:48.959506  812159 machine.go:96] duration metric: took 1.041228522s to provisionDockerMachine
	I0908 11:57:48.959523  812159 start.go:293] postStartSetup for "newest-cni-549052" (driver="kvm2")
	I0908 11:57:48.959538  812159 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 11:57:48.959561  812159 main.go:141] libmachine: (newest-cni-549052) Calling .DriverName
	I0908 11:57:48.959971  812159 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 11:57:48.960004  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHHostname
	I0908 11:57:48.963119  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:48.963623  812159 main.go:141] libmachine: (newest-cni-549052) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:55:ce", ip: ""} in network mk-newest-cni-549052: {Iface:virbr4 ExpiryTime:2025-09-08 12:57:39 +0000 UTC Type:0 Mac:52:54:00:c8:55:ce Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:newest-cni-549052 Clientid:01:52:54:00:c8:55:ce}
	I0908 11:57:48.963654  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined IP address 192.168.72.253 and MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:48.963775  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHPort
	I0908 11:57:48.964031  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHKeyPath
	I0908 11:57:48.964226  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHUsername
	I0908 11:57:48.964436  812159 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/newest-cni-549052/id_rsa Username:docker}
	I0908 11:57:49.059224  812159 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 11:57:49.064132  812159 info.go:137] Remote host: Buildroot 2025.02
	I0908 11:57:49.064163  812159 filesync.go:126] Scanning /home/jenkins/minikube-integration/21503-748170/.minikube/addons for local assets ...
	I0908 11:57:49.064224  812159 filesync.go:126] Scanning /home/jenkins/minikube-integration/21503-748170/.minikube/files for local assets ...
	I0908 11:57:49.064305  812159 filesync.go:149] local asset: /home/jenkins/minikube-integration/21503-748170/.minikube/files/etc/ssl/certs/7523322.pem -> 7523322.pem in /etc/ssl/certs
	I0908 11:57:49.064411  812159 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0908 11:57:49.076217  812159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/files/etc/ssl/certs/7523322.pem --> /etc/ssl/certs/7523322.pem (1708 bytes)
	I0908 11:57:49.105831  812159 start.go:296] duration metric: took 146.290104ms for postStartSetup
	I0908 11:57:49.105875  812159 fix.go:56] duration metric: took 23.926590374s for fixHost
	I0908 11:57:49.105902  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHHostname
	I0908 11:57:49.108745  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:49.109088  812159 main.go:141] libmachine: (newest-cni-549052) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:55:ce", ip: ""} in network mk-newest-cni-549052: {Iface:virbr4 ExpiryTime:2025-09-08 12:57:39 +0000 UTC Type:0 Mac:52:54:00:c8:55:ce Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:newest-cni-549052 Clientid:01:52:54:00:c8:55:ce}
	I0908 11:57:49.109118  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined IP address 192.168.72.253 and MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:49.109350  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHPort
	I0908 11:57:49.109583  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHKeyPath
	I0908 11:57:49.109754  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHKeyPath
	I0908 11:57:49.109896  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHUsername
	I0908 11:57:49.110082  812159 main.go:141] libmachine: Using SSH client type: native
	I0908 11:57:49.110306  812159 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.72.253 22 <nil> <nil>}
	I0908 11:57:49.110322  812159 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0908 11:57:49.235107  812159 main.go:141] libmachine: SSH cmd err, output: <nil>: 1757332669.209146390
	
	I0908 11:57:49.235139  812159 fix.go:216] guest clock: 1757332669.209146390
	I0908 11:57:49.235150  812159 fix.go:229] Guest: 2025-09-08 11:57:49.20914639 +0000 UTC Remote: 2025-09-08 11:57:49.105879402 +0000 UTC m=+28.497071736 (delta=103.266988ms)
	I0908 11:57:49.235197  812159 fix.go:200] guest clock delta is within tolerance: 103.266988ms
	I0908 11:57:49.235207  812159 start.go:83] releasing machines lock for "newest-cni-549052", held for 24.055963092s
	I0908 11:57:49.235243  812159 main.go:141] libmachine: (newest-cni-549052) Calling .DriverName
	I0908 11:57:49.235556  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetIP
	I0908 11:57:49.239174  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:49.239613  812159 main.go:141] libmachine: (newest-cni-549052) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:55:ce", ip: ""} in network mk-newest-cni-549052: {Iface:virbr4 ExpiryTime:2025-09-08 12:57:39 +0000 UTC Type:0 Mac:52:54:00:c8:55:ce Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:newest-cni-549052 Clientid:01:52:54:00:c8:55:ce}
	I0908 11:57:49.239655  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined IP address 192.168.72.253 and MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:49.239860  812159 main.go:141] libmachine: (newest-cni-549052) Calling .DriverName
	I0908 11:57:49.240442  812159 main.go:141] libmachine: (newest-cni-549052) Calling .DriverName
	I0908 11:57:49.240632  812159 main.go:141] libmachine: (newest-cni-549052) Calling .DriverName
	I0908 11:57:49.240739  812159 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 11:57:49.240780  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHHostname
	I0908 11:57:49.240870  812159 ssh_runner.go:195] Run: cat /version.json
	I0908 11:57:49.240898  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHHostname
	I0908 11:57:49.244158  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:49.244589  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:49.244715  812159 main.go:141] libmachine: (newest-cni-549052) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:55:ce", ip: ""} in network mk-newest-cni-549052: {Iface:virbr4 ExpiryTime:2025-09-08 12:57:39 +0000 UTC Type:0 Mac:52:54:00:c8:55:ce Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:newest-cni-549052 Clientid:01:52:54:00:c8:55:ce}
	I0908 11:57:49.244769  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined IP address 192.168.72.253 and MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:49.244867  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHPort
	I0908 11:57:49.244977  812159 main.go:141] libmachine: (newest-cni-549052) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:55:ce", ip: ""} in network mk-newest-cni-549052: {Iface:virbr4 ExpiryTime:2025-09-08 12:57:39 +0000 UTC Type:0 Mac:52:54:00:c8:55:ce Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:newest-cni-549052 Clientid:01:52:54:00:c8:55:ce}
	I0908 11:57:49.244997  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined IP address 192.168.72.253 and MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:49.245067  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHKeyPath
	I0908 11:57:49.245267  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHPort
	I0908 11:57:49.245320  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHUsername
	I0908 11:57:49.245418  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHKeyPath
	I0908 11:57:49.245482  812159 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/newest-cni-549052/id_rsa Username:docker}
	I0908 11:57:49.245890  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHUsername
	I0908 11:57:49.246153  812159 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/newest-cni-549052/id_rsa Username:docker}
	I0908 11:57:49.362405  812159 ssh_runner.go:195] Run: systemctl --version
	I0908 11:57:49.370661  812159 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0908 11:57:49.528289  812159 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0908 11:57:49.538684  812159 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0908 11:57:49.538751  812159 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 11:57:49.565563  812159 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0908 11:57:49.565593  812159 start.go:495] detecting cgroup driver to use...
	I0908 11:57:49.565761  812159 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 11:57:49.592350  812159 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 11:57:49.613551  812159 docker.go:218] disabling cri-docker service (if available) ...
	I0908 11:57:49.613689  812159 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0908 11:57:49.632732  812159 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0908 11:57:49.651906  812159 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0908 11:57:49.834745  812159 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0908 11:57:50.039957  812159 docker.go:234] disabling docker service ...
	I0908 11:57:50.040032  812159 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0908 11:57:50.061560  812159 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0908 11:57:50.081022  812159 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0908 11:57:50.339178  812159 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0908 11:57:50.552105  812159 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 11:57:50.576407  812159 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 11:57:50.608406  812159 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0908 11:57:50.608591  812159 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:57:50.626566  812159 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0908 11:57:50.626768  812159 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:57:50.647898  812159 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:57:50.663558  812159 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:57:50.680626  812159 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 11:57:50.701014  812159 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:57:50.719402  812159 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:57:50.746088  812159 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:57:50.764464  812159 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 11:57:50.779626  812159 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0908 11:57:50.779714  812159 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0908 11:57:50.809097  812159 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 11:57:50.827846  812159 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:57:51.008407  812159 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0908 11:57:51.171326  812159 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0908 11:57:51.171444  812159 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0908 11:57:51.178059  812159 start.go:563] Will wait 60s for crictl version
	I0908 11:57:51.178133  812159 ssh_runner.go:195] Run: which crictl
	I0908 11:57:51.183452  812159 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 11:57:51.240830  812159 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0908 11:57:51.240940  812159 ssh_runner.go:195] Run: crio --version
	I0908 11:57:51.286378  812159 ssh_runner.go:195] Run: crio --version
	I0908 11:57:51.338118  812159 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0908 11:57:51.339143  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetIP
	I0908 11:57:51.342963  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:51.343507  812159 main.go:141] libmachine: (newest-cni-549052) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:55:ce", ip: ""} in network mk-newest-cni-549052: {Iface:virbr4 ExpiryTime:2025-09-08 12:57:39 +0000 UTC Type:0 Mac:52:54:00:c8:55:ce Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:newest-cni-549052 Clientid:01:52:54:00:c8:55:ce}
	I0908 11:57:51.343536  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined IP address 192.168.72.253 and MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:51.343813  812159 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0908 11:57:51.351152  812159 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 11:57:51.375657  812159 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0908 11:57:47.983972  811802 pod_ready.go:94] pod "coredns-66bc5c9577-24xv6" is "Ready"
	I0908 11:57:47.984004  811802 pod_ready.go:86] duration metric: took 3.506380235s for pod "coredns-66bc5c9577-24xv6" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:57:47.993093  811802 pod_ready.go:83] waiting for pod "etcd-embed-certs-256792" in "kube-system" namespace to be "Ready" or be gone ...
	W0908 11:57:50.014039  811802 pod_ready.go:104] pod "etcd-embed-certs-256792" is not "Ready", error: <nil>
	I0908 11:57:49.261606  812547 out.go:252] * Restarting existing kvm2 VM for "default-k8s-diff-port-149795" ...
	I0908 11:57:49.261638  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .Start
	I0908 11:57:49.261799  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) starting domain...
	I0908 11:57:49.261822  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) ensuring networks are active...
	I0908 11:57:49.262614  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Ensuring network default is active
	I0908 11:57:49.262968  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Ensuring network mk-default-k8s-diff-port-149795 is active
	I0908 11:57:49.263618  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) getting domain XML...
	I0908 11:57:49.265935  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) creating domain...
	I0908 11:57:50.935105  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) waiting for IP...
	I0908 11:57:50.936449  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:57:50.937152  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | unable to find current IP address of domain default-k8s-diff-port-149795 in network mk-default-k8s-diff-port-149795
	I0908 11:57:50.937273  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | I0908 11:57:50.937152  812639 retry.go:31] will retry after 249.327002ms: waiting for domain to come up
	I0908 11:57:51.188178  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:57:51.189053  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | unable to find current IP address of domain default-k8s-diff-port-149795 in network mk-default-k8s-diff-port-149795
	I0908 11:57:51.189248  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | I0908 11:57:51.189137  812639 retry.go:31] will retry after 265.912093ms: waiting for domain to come up
	I0908 11:57:51.456953  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:57:51.458188  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | unable to find current IP address of domain default-k8s-diff-port-149795 in network mk-default-k8s-diff-port-149795
	I0908 11:57:51.458219  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | I0908 11:57:51.458148  812639 retry.go:31] will retry after 343.506787ms: waiting for domain to come up
	I0908 11:57:51.803902  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:57:51.804520  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | unable to find current IP address of domain default-k8s-diff-port-149795 in network mk-default-k8s-diff-port-149795
	I0908 11:57:51.804548  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | I0908 11:57:51.804515  812639 retry.go:31] will retry after 600.967003ms: waiting for domain to come up
	I0908 11:57:52.407376  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:57:52.408082  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | unable to find current IP address of domain default-k8s-diff-port-149795 in network mk-default-k8s-diff-port-149795
	I0908 11:57:52.408114  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | I0908 11:57:52.408066  812639 retry.go:31] will retry after 613.161152ms: waiting for domain to come up
	I0908 11:57:51.377153  812159 kubeadm.go:875] updating cluster {Name:newest-cni-549052 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-549052 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.253 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 11:57:51.377322  812159 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 11:57:51.377394  812159 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 11:57:51.440665  812159 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.0". assuming images are not preloaded.
	I0908 11:57:51.440801  812159 ssh_runner.go:195] Run: which lz4
	I0908 11:57:51.446425  812159 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0908 11:57:51.453028  812159 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0908 11:57:51.453068  812159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409455026 bytes)
	I0908 11:57:53.596159  812159 crio.go:462] duration metric: took 2.149798464s to copy over tarball
	I0908 11:57:53.596368  812159 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	W0908 11:57:52.876879  811802 pod_ready.go:104] pod "etcd-embed-certs-256792" is not "Ready", error: <nil>
	I0908 11:57:54.526011  811802 pod_ready.go:94] pod "etcd-embed-certs-256792" is "Ready"
	I0908 11:57:54.526052  811802 pod_ready.go:86] duration metric: took 6.532925098s for pod "etcd-embed-certs-256792" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:57:54.537288  811802 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-256792" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:57:54.556382  811802 pod_ready.go:94] pod "kube-apiserver-embed-certs-256792" is "Ready"
	I0908 11:57:54.556421  811802 pod_ready.go:86] duration metric: took 19.098237ms for pod "kube-apiserver-embed-certs-256792" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:57:54.564214  811802 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-256792" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:57:55.082949  811802 pod_ready.go:94] pod "kube-controller-manager-embed-certs-256792" is "Ready"
	I0908 11:57:55.082992  811802 pod_ready.go:86] duration metric: took 518.748647ms for pod "kube-controller-manager-embed-certs-256792" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:57:55.090074  811802 pod_ready.go:83] waiting for pod "kube-proxy-ph8c8" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:57:55.112048  811802 pod_ready.go:94] pod "kube-proxy-ph8c8" is "Ready"
	I0908 11:57:55.112141  811802 pod_ready.go:86] duration metric: took 22.036176ms for pod "kube-proxy-ph8c8" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:57:55.299989  811802 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-256792" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:57:55.700912  811802 pod_ready.go:94] pod "kube-scheduler-embed-certs-256792" is "Ready"
	I0908 11:57:55.701001  811802 pod_ready.go:86] duration metric: took 400.973642ms for pod "kube-scheduler-embed-certs-256792" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:57:55.701031  811802 pod_ready.go:40] duration metric: took 11.229130008s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 11:57:55.783175  811802 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0908 11:57:55.785247  811802 out.go:179] * Done! kubectl is now configured to use "embed-certs-256792" cluster and "default" namespace by default
	I0908 11:57:53.022495  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:57:53.023198  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | unable to find current IP address of domain default-k8s-diff-port-149795 in network mk-default-k8s-diff-port-149795
	I0908 11:57:53.023226  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | I0908 11:57:53.023163  812639 retry.go:31] will retry after 728.029384ms: waiting for domain to come up
	I0908 11:57:53.752306  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:57:53.752621  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | unable to find current IP address of domain default-k8s-diff-port-149795 in network mk-default-k8s-diff-port-149795
	I0908 11:57:53.752646  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | I0908 11:57:53.752591  812639 retry.go:31] will retry after 871.524139ms: waiting for domain to come up
	I0908 11:57:54.625864  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:57:54.626780  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | unable to find current IP address of domain default-k8s-diff-port-149795 in network mk-default-k8s-diff-port-149795
	I0908 11:57:54.626808  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | I0908 11:57:54.626664  812639 retry.go:31] will retry after 1.229648452s: waiting for domain to come up
	I0908 11:57:55.858560  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:57:55.859312  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | unable to find current IP address of domain default-k8s-diff-port-149795 in network mk-default-k8s-diff-port-149795
	I0908 11:57:55.859345  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | I0908 11:57:55.859213  812639 retry.go:31] will retry after 1.332770377s: waiting for domain to come up
	I0908 11:57:57.194137  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:57:57.194904  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | unable to find current IP address of domain default-k8s-diff-port-149795 in network mk-default-k8s-diff-port-149795
	I0908 11:57:57.194937  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | I0908 11:57:57.194805  812639 retry.go:31] will retry after 1.80848352s: waiting for domain to come up
	I0908 11:57:55.970733  812159 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.374312152s)
	I0908 11:57:55.970795  812159 crio.go:469] duration metric: took 2.374516649s to extract the tarball
	I0908 11:57:55.970807  812159 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0908 11:57:56.059942  812159 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 11:57:56.124762  812159 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 11:57:56.124795  812159 cache_images.go:85] Images are preloaded, skipping loading
	I0908 11:57:56.124807  812159 kubeadm.go:926] updating node { 192.168.72.253 8443 v1.34.0 crio true true} ...
	I0908 11:57:56.124970  812159 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-549052 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.253
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:newest-cni-549052 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0908 11:57:56.125067  812159 ssh_runner.go:195] Run: crio config
	I0908 11:57:56.196101  812159 cni.go:84] Creating CNI manager for ""
	I0908 11:57:56.196127  812159 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0908 11:57:56.196149  812159 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0908 11:57:56.196180  812159 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.253 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-549052 NodeName:newest-cni-549052 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.253"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.253 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 11:57:56.196346  812159 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.253
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-549052"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.253"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.253"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0908 11:57:56.196418  812159 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 11:57:56.215211  812159 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 11:57:56.215376  812159 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 11:57:56.234276  812159 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0908 11:57:56.271439  812159 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 11:57:56.305092  812159 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I0908 11:57:56.334775  812159 ssh_runner.go:195] Run: grep 192.168.72.253	control-plane.minikube.internal$ /etc/hosts
	I0908 11:57:56.355649  812159 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.253	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 11:57:56.376879  812159 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:57:56.607540  812159 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 11:57:56.638865  812159 certs.go:68] Setting up /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/newest-cni-549052 for IP: 192.168.72.253
	I0908 11:57:56.638898  812159 certs.go:194] generating shared ca certs ...
	I0908 11:57:56.638925  812159 certs.go:226] acquiring lock for ca certs: {Name:mkaa8fe7cb1fe9bdb745b85589d42151c557e20e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:57:56.639125  812159 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21503-748170/.minikube/ca.key
	I0908 11:57:56.639185  812159 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21503-748170/.minikube/proxy-client-ca.key
	I0908 11:57:56.639203  812159 certs.go:256] generating profile certs ...
	I0908 11:57:56.639330  812159 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/newest-cni-549052/client.key
	I0908 11:57:56.639405  812159 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/newest-cni-549052/apiserver.key.23d252d4
	I0908 11:57:56.639459  812159 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/newest-cni-549052/proxy-client.key
	I0908 11:57:56.639640  812159 certs.go:484] found cert: /home/jenkins/minikube-integration/21503-748170/.minikube/certs/752332.pem (1338 bytes)
	W0908 11:57:56.639687  812159 certs.go:480] ignoring /home/jenkins/minikube-integration/21503-748170/.minikube/certs/752332_empty.pem, impossibly tiny 0 bytes
	I0908 11:57:56.639696  812159 certs.go:484] found cert: /home/jenkins/minikube-integration/21503-748170/.minikube/certs/ca-key.pem (1679 bytes)
	I0908 11:57:56.639735  812159 certs.go:484] found cert: /home/jenkins/minikube-integration/21503-748170/.minikube/certs/ca.pem (1078 bytes)
	I0908 11:57:56.639776  812159 certs.go:484] found cert: /home/jenkins/minikube-integration/21503-748170/.minikube/certs/cert.pem (1123 bytes)
	I0908 11:57:56.639806  812159 certs.go:484] found cert: /home/jenkins/minikube-integration/21503-748170/.minikube/certs/key.pem (1675 bytes)
	I0908 11:57:56.639866  812159 certs.go:484] found cert: /home/jenkins/minikube-integration/21503-748170/.minikube/files/etc/ssl/certs/7523322.pem (1708 bytes)
	I0908 11:57:56.645836  812159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 11:57:56.704224  812159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0908 11:57:56.757396  812159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 11:57:56.802349  812159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0908 11:57:56.845704  812159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/newest-cni-549052/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0908 11:57:56.890847  812159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/newest-cni-549052/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0908 11:57:56.939028  812159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/newest-cni-549052/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 11:57:56.977212  812159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/newest-cni-549052/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0908 11:57:57.015764  812159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/files/etc/ssl/certs/7523322.pem --> /usr/share/ca-certificates/7523322.pem (1708 bytes)
	I0908 11:57:57.053427  812159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 11:57:57.091952  812159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/certs/752332.pem --> /usr/share/ca-certificates/752332.pem (1338 bytes)
	I0908 11:57:57.133084  812159 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 11:57:57.160776  812159 ssh_runner.go:195] Run: openssl version
	I0908 11:57:57.168578  812159 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/752332.pem && ln -fs /usr/share/ca-certificates/752332.pem /etc/ssl/certs/752332.pem"
	I0908 11:57:57.187551  812159 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/752332.pem
	I0908 11:57:57.195393  812159 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  8 10:41 /usr/share/ca-certificates/752332.pem
	I0908 11:57:57.195470  812159 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/752332.pem
	I0908 11:57:57.208192  812159 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/752332.pem /etc/ssl/certs/51391683.0"
	I0908 11:57:57.230215  812159 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7523322.pem && ln -fs /usr/share/ca-certificates/7523322.pem /etc/ssl/certs/7523322.pem"
	I0908 11:57:57.245062  812159 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7523322.pem
	I0908 11:57:57.251266  812159 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  8 10:41 /usr/share/ca-certificates/7523322.pem
	I0908 11:57:57.251400  812159 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7523322.pem
	I0908 11:57:57.259619  812159 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7523322.pem /etc/ssl/certs/3ec20f2e.0"
	I0908 11:57:57.275438  812159 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 11:57:57.291284  812159 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:57:57.297096  812159 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 10:30 /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:57:57.297171  812159 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:57:57.304774  812159 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0908 11:57:57.320721  812159 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 11:57:57.326703  812159 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0908 11:57:57.337457  812159 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0908 11:57:57.345786  812159 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0908 11:57:57.355892  812159 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0908 11:57:57.365198  812159 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0908 11:57:57.378654  812159 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0908 11:57:57.387545  812159 kubeadm.go:392] StartCluster: {Name:newest-cni-549052 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-549052 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.253 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 11:57:57.387656  812159 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0908 11:57:57.387758  812159 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 11:57:57.464500  812159 cri.go:89] found id: ""
	I0908 11:57:57.464693  812159 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0908 11:57:57.486928  812159 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0908 11:57:57.487025  812159 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0908 11:57:57.487123  812159 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0908 11:57:57.510272  812159 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0908 11:57:57.511210  812159 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-549052" does not appear in /home/jenkins/minikube-integration/21503-748170/kubeconfig
	I0908 11:57:57.511776  812159 kubeconfig.go:62] /home/jenkins/minikube-integration/21503-748170/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-549052" cluster setting kubeconfig missing "newest-cni-549052" context setting]
	I0908 11:57:57.512553  812159 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21503-748170/kubeconfig: {Name:mk78ced2572c8fbe21fb139deb9ae019703be092 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:57:57.623218  812159 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0908 11:57:57.637757  812159 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.72.253
	I0908 11:57:57.637811  812159 kubeadm.go:1152] stopping kube-system containers ...
	I0908 11:57:57.637832  812159 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0908 11:57:57.637908  812159 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 11:57:57.691283  812159 cri.go:89] found id: ""
	I0908 11:57:57.691379  812159 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0908 11:57:57.716106  812159 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0908 11:57:57.731578  812159 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0908 11:57:57.731603  812159 kubeadm.go:157] found existing configuration files:
	
	I0908 11:57:57.731664  812159 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0908 11:57:57.746531  812159 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0908 11:57:57.746608  812159 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0908 11:57:57.759257  812159 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0908 11:57:57.772840  812159 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0908 11:57:57.772906  812159 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0908 11:57:57.786965  812159 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0908 11:57:57.800254  812159 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0908 11:57:57.800338  812159 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0908 11:57:57.812215  812159 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0908 11:57:57.823082  812159 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0908 11:57:57.823148  812159 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
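
	The four grep/rm pairs above are a stale-kubeconfig sweep: each file under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, otherwise it is removed so kubeadm can regenerate it. An illustrative shell equivalent of the pattern (not minikube's actual Go implementation in kubeadm.go) is:
	
		for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
		  # keep the file only if it points at the expected endpoint; otherwise drop it
		  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
		    || sudo rm -f "/etc/kubernetes/$f"
		done
	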
	I0908 11:57:57.835098  812159 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0908 11:57:57.847109  812159 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 11:57:57.918103  812159 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 11:57:59.272863  812159 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.354713833s)
	I0908 11:57:59.272905  812159 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0908 11:57:59.651778  812159 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 11:57:59.764562  812159 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0908 11:57:59.901757  812159 api_server.go:52] waiting for apiserver process to appear ...
	I0908 11:57:59.901864  812159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 11:58:00.402003  812159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 11:57:59.005793  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:57:59.006326  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | unable to find current IP address of domain default-k8s-diff-port-149795 in network mk-default-k8s-diff-port-149795
	I0908 11:57:59.006354  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | I0908 11:57:59.006279  812639 retry.go:31] will retry after 2.473556197s: waiting for domain to come up
	I0908 11:58:01.481350  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:01.482159  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | unable to find current IP address of domain default-k8s-diff-port-149795 in network mk-default-k8s-diff-port-149795
	I0908 11:58:01.482187  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | I0908 11:58:01.482010  812639 retry.go:31] will retry after 2.823753092s: waiting for domain to come up
	I0908 11:58:00.902932  812159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 11:58:01.402528  812159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 11:58:01.464965  812159 api_server.go:72] duration metric: took 1.563207193s to wait for apiserver process to appear ...
	I0908 11:58:01.465008  812159 api_server.go:88] waiting for apiserver healthz status ...
	I0908 11:58:01.465038  812159 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8443/healthz ...
	I0908 11:58:01.465860  812159 api_server.go:269] stopped: https://192.168.72.253:8443/healthz: Get "https://192.168.72.253:8443/healthz": dial tcp 192.168.72.253:8443: connect: connection refused
	I0908 11:58:01.965402  812159 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8443/healthz ...
	I0908 11:58:05.019304  812159 api_server.go:279] https://192.168.72.253:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0908 11:58:05.019353  812159 api_server.go:103] status: https://192.168.72.253:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0908 11:58:05.019374  812159 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8443/healthz ...
	I0908 11:58:05.081963  812159 api_server.go:279] https://192.168.72.253:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0908 11:58:05.081995  812159 api_server.go:103] status: https://192.168.72.253:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0908 11:58:05.466024  812159 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8443/healthz ...
	I0908 11:58:05.471201  812159 api_server.go:279] https://192.168.72.253:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 11:58:05.471232  812159 api_server.go:103] status: https://192.168.72.253:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0908 11:58:05.965877  812159 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8443/healthz ...
	I0908 11:58:05.984640  812159 api_server.go:279] https://192.168.72.253:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 11:58:05.984751  812159 api_server.go:103] status: https://192.168.72.253:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0908 11:58:06.465387  812159 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8443/healthz ...
	I0908 11:58:06.474486  812159 api_server.go:279] https://192.168.72.253:8443/healthz returned 200:
	ok
	I0908 11:58:06.487363  812159 api_server.go:141] control plane version: v1.34.0
	I0908 11:58:06.487395  812159 api_server.go:131] duration metric: took 5.022379369s to wait for apiserver health ...
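
	The 403 -> 500 -> 200 progression above is the usual restart pattern: the early 403s occur because unauthenticated /healthz access is only granted once the rbac/bootstrap-roles post-start hook (the same check reported as failed in the 500 bodies) has installed the default roles, and the 500s simply list post-start hooks that have not yet completed. Once the cluster is up, the same per-check detail can be requested directly, for example:
	
		kubectl --context newest-cni-549052 get --raw='/healthz?verbose'
	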
	I0908 11:58:06.487406  812159 cni.go:84] Creating CNI manager for ""
	I0908 11:58:06.487413  812159 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0908 11:58:06.488862  812159 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0908 11:58:06.490427  812159 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0908 11:58:06.532385  812159 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
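
	The 496-byte conflist copied to /etc/cni/net.d/1-k8s.conflist configures the standard bridge CNI plugin; its exact contents are not shown in this log. A representative bridge + portmap conflist for this profile's 10.42.0.0/16 pod CIDR (illustrative only, not necessarily byte-for-byte what minikube writes) looks like:
	
		{
		  "cniVersion": "0.3.1",
		  "name": "bridge",
		  "plugins": [
		    {
		      "type": "bridge",
		      "bridge": "bridge",
		      "isGateway": true,
		      "ipMasq": true,
		      "hairpinMode": true,
		      "ipam": { "type": "host-local", "subnet": "10.42.0.0/16" }
		    },
		    { "type": "portmap", "capabilities": { "portMappings": true } }
		  ]
		}
	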
	I0908 11:58:06.584116  812159 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 11:58:06.591628  812159 system_pods.go:59] 8 kube-system pods found
	I0908 11:58:06.591696  812159 system_pods.go:61] "coredns-66bc5c9577-k9fz2" [56b6d720-5155-4da7-b02f-fd0a70f84b08] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 11:58:06.591710  812159 system_pods.go:61] "etcd-newest-cni-549052" [6f3da5eb-9f20-4a77-b5c4-62c1b9a274d7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 11:58:06.591719  812159 system_pods.go:61] "kube-apiserver-newest-cni-549052" [b4dfa754-a3c0-4462-a3e9-e8c9826f82b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 11:58:06.591728  812159 system_pods.go:61] "kube-controller-manager-newest-cni-549052" [4527e053-5868-4788-a6d8-09c02292d1a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 11:58:06.591735  812159 system_pods.go:61] "kube-proxy-n9kwb" [9d23138f-39c4-4ffa-8e33-e2f0eaea4051] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0908 11:58:06.591744  812159 system_pods.go:61] "kube-scheduler-newest-cni-549052" [376f1898-d179-4213-82dc-6eb522068d16] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 11:58:06.591752  812159 system_pods.go:61] "metrics-server-746fcd58dc-4jzrw" [295f1a4b-153b-4b9c-bfb1-38a153f63a87] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 11:58:06.591764  812159 system_pods.go:61] "storage-provisioner" [5e5d3e8c-59de-401f-bb0a-4ab29de93cdf] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 11:58:06.591774  812159 system_pods.go:74] duration metric: took 7.627257ms to wait for pod list to return data ...
	I0908 11:58:06.591792  812159 node_conditions.go:102] verifying NodePressure condition ...
	I0908 11:58:06.598308  812159 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0908 11:58:06.598355  812159 node_conditions.go:123] node cpu capacity is 2
	I0908 11:58:06.598377  812159 node_conditions.go:105] duration metric: took 6.573897ms to run NodePressure ...
	I0908 11:58:06.598403  812159 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 11:58:06.918992  812159 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0908 11:58:06.937515  812159 ops.go:34] apiserver oom_adj: -16
	I0908 11:58:06.937544  812159 kubeadm.go:593] duration metric: took 9.450498392s to restartPrimaryControlPlane
	I0908 11:58:06.937557  812159 kubeadm.go:394] duration metric: took 9.550021975s to StartCluster
	I0908 11:58:06.937584  812159 settings.go:142] acquiring lock: {Name:mk18c67e9470bbfdfeaf7a5d3ce5d7a1813bc966 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:58:06.937686  812159 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21503-748170/kubeconfig
	I0908 11:58:06.939400  812159 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21503-748170/kubeconfig: {Name:mk78ced2572c8fbe21fb139deb9ae019703be092 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:58:06.939706  812159 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.253 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0908 11:58:06.939802  812159 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0908 11:58:06.939909  812159 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-549052"
	I0908 11:58:06.939931  812159 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-549052"
	W0908 11:58:06.939945  812159 addons.go:247] addon storage-provisioner should already be in state true
	I0908 11:58:06.939971  812159 addons.go:69] Setting default-storageclass=true in profile "newest-cni-549052"
	I0908 11:58:06.940015  812159 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-549052"
	I0908 11:58:06.939997  812159 addons.go:69] Setting metrics-server=true in profile "newest-cni-549052"
	I0908 11:58:06.940034  812159 addons.go:238] Setting addon metrics-server=true in "newest-cni-549052"
	W0908 11:58:06.940043  812159 addons.go:247] addon metrics-server should already be in state true
	I0908 11:58:06.940050  812159 config.go:182] Loaded profile config "newest-cni-549052": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 11:58:06.940088  812159 host.go:66] Checking if "newest-cni-549052" exists ...
	I0908 11:58:06.939980  812159 host.go:66] Checking if "newest-cni-549052" exists ...
	I0908 11:58:06.940490  812159 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:58:06.940504  812159 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:58:06.939977  812159 addons.go:69] Setting dashboard=true in profile "newest-cni-549052"
	I0908 11:58:06.940524  812159 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:58:06.940526  812159 addons.go:238] Setting addon dashboard=true in "newest-cni-549052"
	W0908 11:58:06.940537  812159 addons.go:247] addon dashboard should already be in state true
	I0908 11:58:06.940557  812159 host.go:66] Checking if "newest-cni-549052" exists ...
	I0908 11:58:06.940557  812159 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:58:06.940596  812159 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:58:06.940720  812159 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:58:06.940872  812159 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:58:06.940971  812159 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:58:06.941152  812159 out.go:179] * Verifying Kubernetes components...
	I0908 11:58:06.942460  812159 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:58:06.960366  812159 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36231
	I0908 11:58:06.960526  812159 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40785
	I0908 11:58:06.960592  812159 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43601
	I0908 11:58:06.961045  812159 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:58:06.961055  812159 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:58:06.961093  812159 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33949
	I0908 11:58:06.961132  812159 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:58:06.961638  812159 main.go:141] libmachine: Using API Version  1
	I0908 11:58:06.961661  812159 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:58:06.961738  812159 main.go:141] libmachine: Using API Version  1
	I0908 11:58:06.961758  812159 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:58:06.961821  812159 main.go:141] libmachine: Using API Version  1
	I0908 11:58:06.961839  812159 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:58:06.961841  812159 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:58:06.962218  812159 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:58:06.962258  812159 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:58:06.962276  812159 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:58:06.962358  812159 main.go:141] libmachine: Using API Version  1
	I0908 11:58:06.962367  812159 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:58:06.962834  812159 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:58:06.962836  812159 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:58:06.962872  812159 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:58:06.962886  812159 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:58:06.963106  812159 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:58:06.963156  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetState
	I0908 11:58:06.963591  812159 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:58:06.963624  812159 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:58:06.966128  812159 addons.go:238] Setting addon default-storageclass=true in "newest-cni-549052"
	W0908 11:58:06.966146  812159 addons.go:247] addon default-storageclass should already be in state true
	I0908 11:58:06.966174  812159 host.go:66] Checking if "newest-cni-549052" exists ...
	I0908 11:58:06.966448  812159 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:58:06.966471  812159 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:58:06.982657  812159 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35105
	I0908 11:58:06.983247  812159 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:58:06.983304  812159 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38259
	I0908 11:58:06.984118  812159 main.go:141] libmachine: Using API Version  1
	I0908 11:58:06.984140  812159 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:58:06.984338  812159 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:58:06.984433  812159 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34807
	I0908 11:58:06.984604  812159 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:58:06.984817  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetState
	I0908 11:58:06.984922  812159 main.go:141] libmachine: Using API Version  1
	I0908 11:58:06.984992  812159 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:58:06.985062  812159 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:58:06.985634  812159 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:58:06.985823  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetState
	I0908 11:58:06.986269  812159 main.go:141] libmachine: Using API Version  1
	I0908 11:58:06.986294  812159 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:58:06.986675  812159 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35733
	I0908 11:58:06.987058  812159 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:58:06.987115  812159 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:58:06.987598  812159 main.go:141] libmachine: Using API Version  1
	I0908 11:58:06.987626  812159 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:58:06.987711  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetState
	I0908 11:58:06.988003  812159 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:58:06.988697  812159 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:58:06.988741  812159 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:58:06.989378  812159 main.go:141] libmachine: (newest-cni-549052) Calling .DriverName
	I0908 11:58:06.989781  812159 main.go:141] libmachine: (newest-cni-549052) Calling .DriverName
	I0908 11:58:06.990188  812159 main.go:141] libmachine: (newest-cni-549052) Calling .DriverName
	I0908 11:58:06.990896  812159 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0908 11:58:06.991714  812159 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 11:58:06.991751  812159 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0908 11:58:06.992503  812159 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0908 11:58:06.992518  812159 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0908 11:58:06.992539  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHHostname
	I0908 11:58:06.993213  812159 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 11:58:06.993260  812159 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0908 11:58:06.993282  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHHostname
	I0908 11:58:06.994155  812159 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0908 11:58:04.307776  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:04.308591  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | unable to find current IP address of domain default-k8s-diff-port-149795 in network mk-default-k8s-diff-port-149795
	I0908 11:58:04.308621  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | I0908 11:58:04.308482  812639 retry.go:31] will retry after 3.169091318s: waiting for domain to come up
	I0908 11:58:07.481840  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:07.482455  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | unable to find current IP address of domain default-k8s-diff-port-149795 in network mk-default-k8s-diff-port-149795
	I0908 11:58:07.482485  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | I0908 11:58:07.482426  812639 retry.go:31] will retry after 4.873827649s: waiting for domain to come up
	I0908 11:58:06.995651  812159 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0908 11:58:06.995667  812159 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0908 11:58:06.995682  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHHostname
	I0908 11:58:06.996803  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:58:06.997568  812159 main.go:141] libmachine: (newest-cni-549052) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:55:ce", ip: ""} in network mk-newest-cni-549052: {Iface:virbr4 ExpiryTime:2025-09-08 12:57:39 +0000 UTC Type:0 Mac:52:54:00:c8:55:ce Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:newest-cni-549052 Clientid:01:52:54:00:c8:55:ce}
	I0908 11:58:06.997599  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined IP address 192.168.72.253 and MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:58:06.998621  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:58:06.998653  812159 main.go:141] libmachine: (newest-cni-549052) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:55:ce", ip: ""} in network mk-newest-cni-549052: {Iface:virbr4 ExpiryTime:2025-09-08 12:57:39 +0000 UTC Type:0 Mac:52:54:00:c8:55:ce Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:newest-cni-549052 Clientid:01:52:54:00:c8:55:ce}
	I0908 11:58:06.998668  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHPort
	I0908 11:58:06.998726  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined IP address 192.168.72.253 and MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:58:06.998826  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHKeyPath
	I0908 11:58:06.998890  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHPort
	I0908 11:58:06.998952  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHUsername
	I0908 11:58:06.999129  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHKeyPath
	I0908 11:58:06.999252  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHUsername
	I0908 11:58:06.999398  812159 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/newest-cni-549052/id_rsa Username:docker}
	I0908 11:58:06.999420  812159 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/newest-cni-549052/id_rsa Username:docker}
	I0908 11:58:06.999766  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:58:07.000335  812159 main.go:141] libmachine: (newest-cni-549052) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:55:ce", ip: ""} in network mk-newest-cni-549052: {Iface:virbr4 ExpiryTime:2025-09-08 12:57:39 +0000 UTC Type:0 Mac:52:54:00:c8:55:ce Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:newest-cni-549052 Clientid:01:52:54:00:c8:55:ce}
	I0908 11:58:07.000379  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined IP address 192.168.72.253 and MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:58:07.000623  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHPort
	I0908 11:58:07.000819  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHKeyPath
	I0908 11:58:07.001009  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHUsername
	I0908 11:58:07.001150  812159 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/newest-cni-549052/id_rsa Username:docker}
	I0908 11:58:07.027364  812159 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45311
	I0908 11:58:07.027938  812159 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:58:07.028543  812159 main.go:141] libmachine: Using API Version  1
	I0908 11:58:07.028576  812159 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:58:07.029074  812159 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:58:07.029397  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetState
	I0908 11:58:07.031401  812159 main.go:141] libmachine: (newest-cni-549052) Calling .DriverName
	I0908 11:58:07.031636  812159 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0908 11:58:07.031654  812159 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0908 11:58:07.031674  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHHostname
	I0908 11:58:07.034636  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:58:07.035102  812159 main.go:141] libmachine: (newest-cni-549052) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:55:ce", ip: ""} in network mk-newest-cni-549052: {Iface:virbr4 ExpiryTime:2025-09-08 12:57:39 +0000 UTC Type:0 Mac:52:54:00:c8:55:ce Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:newest-cni-549052 Clientid:01:52:54:00:c8:55:ce}
	I0908 11:58:07.035128  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined IP address 192.168.72.253 and MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:58:07.035453  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHPort
	I0908 11:58:07.035616  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHKeyPath
	I0908 11:58:07.035764  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHUsername
	I0908 11:58:07.035886  812159 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/newest-cni-549052/id_rsa Username:docker}
	I0908 11:58:07.316120  812159 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 11:58:07.341349  812159 api_server.go:52] waiting for apiserver process to appear ...
	I0908 11:58:07.341448  812159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 11:58:07.379877  812159 api_server.go:72] duration metric: took 440.130445ms to wait for apiserver process to appear ...
	I0908 11:58:07.379918  812159 api_server.go:88] waiting for apiserver healthz status ...
	I0908 11:58:07.379944  812159 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8443/healthz ...
	I0908 11:58:07.394846  812159 api_server.go:279] https://192.168.72.253:8443/healthz returned 200:
	ok
	I0908 11:58:07.396315  812159 api_server.go:141] control plane version: v1.34.0
	I0908 11:58:07.396353  812159 api_server.go:131] duration metric: took 16.426352ms to wait for apiserver health ...
	I0908 11:58:07.396369  812159 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 11:58:07.403249  812159 system_pods.go:59] 8 kube-system pods found
	I0908 11:58:07.403277  812159 system_pods.go:61] "coredns-66bc5c9577-k9fz2" [56b6d720-5155-4da7-b02f-fd0a70f84b08] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 11:58:07.403284  812159 system_pods.go:61] "etcd-newest-cni-549052" [6f3da5eb-9f20-4a77-b5c4-62c1b9a274d7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 11:58:07.403294  812159 system_pods.go:61] "kube-apiserver-newest-cni-549052" [b4dfa754-a3c0-4462-a3e9-e8c9826f82b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 11:58:07.403300  812159 system_pods.go:61] "kube-controller-manager-newest-cni-549052" [4527e053-5868-4788-a6d8-09c02292d1a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 11:58:07.403304  812159 system_pods.go:61] "kube-proxy-n9kwb" [9d23138f-39c4-4ffa-8e33-e2f0eaea4051] Running
	I0908 11:58:07.403310  812159 system_pods.go:61] "kube-scheduler-newest-cni-549052" [376f1898-d179-4213-82dc-6eb522068d16] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 11:58:07.403318  812159 system_pods.go:61] "metrics-server-746fcd58dc-4jzrw" [295f1a4b-153b-4b9c-bfb1-38a153f63a87] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 11:58:07.403322  812159 system_pods.go:61] "storage-provisioner" [5e5d3e8c-59de-401f-bb0a-4ab29de93cdf] Running
	I0908 11:58:07.403327  812159 system_pods.go:74] duration metric: took 6.945833ms to wait for pod list to return data ...
	I0908 11:58:07.403335  812159 default_sa.go:34] waiting for default service account to be created ...
	I0908 11:58:07.408141  812159 default_sa.go:45] found service account: "default"
	I0908 11:58:07.408173  812159 default_sa.go:55] duration metric: took 4.831114ms for default service account to be created ...
	I0908 11:58:07.408192  812159 kubeadm.go:578] duration metric: took 468.452171ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0908 11:58:07.408219  812159 node_conditions.go:102] verifying NodePressure condition ...
	I0908 11:58:07.413477  812159 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0908 11:58:07.413502  812159 node_conditions.go:123] node cpu capacity is 2
	I0908 11:58:07.413517  812159 node_conditions.go:105] duration metric: took 5.291281ms to run NodePressure ...
	I0908 11:58:07.413533  812159 start.go:241] waiting for startup goroutines ...
	I0908 11:58:07.585717  812159 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0908 11:58:07.585746  812159 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0908 11:58:07.590627  812159 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0908 11:58:07.590649  812159 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0908 11:58:07.615471  812159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0908 11:58:07.617433  812159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 11:58:07.637921  812159 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0908 11:58:07.637960  812159 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0908 11:58:07.658519  812159 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0908 11:58:07.658558  812159 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0908 11:58:07.716711  812159 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 11:58:07.716747  812159 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0908 11:58:07.741200  812159 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0908 11:58:07.741272  812159 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0908 11:58:07.784300  812159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 11:58:07.838089  812159 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0908 11:58:07.838113  812159 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0908 11:58:07.916483  812159 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0908 11:58:07.916517  812159 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0908 11:58:07.994434  812159 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0908 11:58:07.994467  812159 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0908 11:58:08.034252  812159 main.go:141] libmachine: Making call to close driver server
	I0908 11:58:08.034281  812159 main.go:141] libmachine: (newest-cni-549052) Calling .Close
	I0908 11:58:08.034576  812159 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:58:08.034596  812159 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:58:08.034606  812159 main.go:141] libmachine: Making call to close driver server
	I0908 11:58:08.034614  812159 main.go:141] libmachine: (newest-cni-549052) Calling .Close
	I0908 11:58:08.034890  812159 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:58:08.034913  812159 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:58:08.034914  812159 main.go:141] libmachine: (newest-cni-549052) DBG | Closing plugin on server side
	I0908 11:58:08.051590  812159 main.go:141] libmachine: Making call to close driver server
	I0908 11:58:08.051616  812159 main.go:141] libmachine: (newest-cni-549052) Calling .Close
	I0908 11:58:08.051947  812159 main.go:141] libmachine: (newest-cni-549052) DBG | Closing plugin on server side
	I0908 11:58:08.052003  812159 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:58:08.052014  812159 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:58:08.077806  812159 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0908 11:58:08.077837  812159 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0908 11:58:08.158787  812159 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0908 11:58:08.158817  812159 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0908 11:58:08.222357  812159 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0908 11:58:08.222389  812159 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0908 11:58:08.273376  812159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0908 11:58:09.418444  812159 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.634102035s)
	I0908 11:58:09.418507  812159 main.go:141] libmachine: Making call to close driver server
	I0908 11:58:09.418522  812159 main.go:141] libmachine: (newest-cni-549052) Calling .Close
	I0908 11:58:09.418753  812159 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.801281927s)
	I0908 11:58:09.418793  812159 main.go:141] libmachine: Making call to close driver server
	I0908 11:58:09.418806  812159 main.go:141] libmachine: (newest-cni-549052) Calling .Close
	I0908 11:58:09.418823  812159 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:58:09.418841  812159 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:58:09.418854  812159 main.go:141] libmachine: Making call to close driver server
	I0908 11:58:09.418863  812159 main.go:141] libmachine: (newest-cni-549052) Calling .Close
	I0908 11:58:09.419030  812159 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:58:09.419041  812159 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:58:09.419057  812159 main.go:141] libmachine: Making call to close driver server
	I0908 11:58:09.419063  812159 main.go:141] libmachine: (newest-cni-549052) Calling .Close
	I0908 11:58:09.420923  812159 main.go:141] libmachine: (newest-cni-549052) DBG | Closing plugin on server side
	I0908 11:58:09.420930  812159 main.go:141] libmachine: (newest-cni-549052) DBG | Closing plugin on server side
	I0908 11:58:09.420929  812159 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:58:09.420954  812159 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:58:09.420966  812159 addons.go:479] Verifying addon metrics-server=true in "newest-cni-549052"
	I0908 11:58:09.420929  812159 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:58:09.421023  812159 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:58:09.719509  812159 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.446049393s)
	I0908 11:58:09.719603  812159 main.go:141] libmachine: Making call to close driver server
	I0908 11:58:09.719621  812159 main.go:141] libmachine: (newest-cni-549052) Calling .Close
	I0908 11:58:09.719989  812159 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:58:09.720009  812159 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:58:09.720020  812159 main.go:141] libmachine: Making call to close driver server
	I0908 11:58:09.720029  812159 main.go:141] libmachine: (newest-cni-549052) Calling .Close
	I0908 11:58:09.720303  812159 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:58:09.720337  812159 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:58:09.721883  812159 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-549052 addons enable metrics-server
	
	I0908 11:58:09.723171  812159 out.go:179] * Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
	I0908 11:58:09.724322  812159 addons.go:514] duration metric: took 2.784529073s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner dashboard]
	I0908 11:58:09.724366  812159 start.go:246] waiting for cluster config update ...
	I0908 11:58:09.724394  812159 start.go:255] writing updated cluster config ...
	I0908 11:58:09.724722  812159 ssh_runner.go:195] Run: rm -f paused
	I0908 11:58:09.776981  812159 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0908 11:58:09.778629  812159 out.go:179] * Done! kubectl is now configured to use "newest-cni-549052" cluster and "default" namespace by default
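
	With the restart complete and the addons applied, the profile can be inspected with the usual commands, for example:
	
		minikube -p newest-cni-549052 addons list
		kubectl --context newest-cni-549052 -n kube-system get pods
		kubectl --context newest-cni-549052 -n kubernetes-dashboard get pods
	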
	I0908 11:58:12.358330  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:12.359376  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has current primary IP address 192.168.39.109 and MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:12.359409  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) found domain IP: 192.168.39.109
	I0908 11:58:12.359424  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) reserving static IP address...
	I0908 11:58:12.359947  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-149795", mac: "52:54:00:92:f9:54", ip: "192.168.39.109"} in network mk-default-k8s-diff-port-149795: {Iface:virbr3 ExpiryTime:2025-09-08 12:58:04 +0000 UTC Type:0 Mac:52:54:00:92:f9:54 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:default-k8s-diff-port-149795 Clientid:01:52:54:00:92:f9:54}
	I0908 11:58:12.359979  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | skip adding static IP to network mk-default-k8s-diff-port-149795 - found existing host DHCP lease matching {name: "default-k8s-diff-port-149795", mac: "52:54:00:92:f9:54", ip: "192.168.39.109"}
	I0908 11:58:12.359995  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | Getting to WaitForSSH function...
	I0908 11:58:12.360170  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) reserved static IP address 192.168.39.109 for domain default-k8s-diff-port-149795
	I0908 11:58:12.360214  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) waiting for SSH...
	I0908 11:58:12.362949  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:12.363320  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f9:54", ip: ""} in network mk-default-k8s-diff-port-149795: {Iface:virbr3 ExpiryTime:2025-09-08 12:58:04 +0000 UTC Type:0 Mac:52:54:00:92:f9:54 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:default-k8s-diff-port-149795 Clientid:01:52:54:00:92:f9:54}
	I0908 11:58:12.363348  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined IP address 192.168.39.109 and MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:12.363501  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | Using SSH client type: external
	I0908 11:58:12.363526  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | Using SSH private key: /home/jenkins/minikube-integration/21503-748170/.minikube/machines/default-k8s-diff-port-149795/id_rsa (-rw-------)
	I0908 11:58:12.363569  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.109 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21503-748170/.minikube/machines/default-k8s-diff-port-149795/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0908 11:58:12.363582  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | About to run SSH command:
	I0908 11:58:12.363595  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | exit 0
	I0908 11:58:12.498804  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | SSH cmd err, output: <nil>: 
	I0908 11:58:12.499008  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetConfigRaw
	I0908 11:58:12.499663  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetIP
	I0908 11:58:12.502215  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:12.502705  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f9:54", ip: ""} in network mk-default-k8s-diff-port-149795: {Iface:virbr3 ExpiryTime:2025-09-08 12:58:04 +0000 UTC Type:0 Mac:52:54:00:92:f9:54 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:default-k8s-diff-port-149795 Clientid:01:52:54:00:92:f9:54}
	I0908 11:58:12.502730  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined IP address 192.168.39.109 and MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:12.503040  812547 profile.go:143] Saving config to /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/default-k8s-diff-port-149795/config.json ...
	I0908 11:58:12.505722  812547 machine.go:93] provisionDockerMachine start ...
	I0908 11:58:12.505749  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .DriverName
	I0908 11:58:12.505925  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHHostname
	I0908 11:58:12.508541  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:12.508928  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f9:54", ip: ""} in network mk-default-k8s-diff-port-149795: {Iface:virbr3 ExpiryTime:2025-09-08 12:58:04 +0000 UTC Type:0 Mac:52:54:00:92:f9:54 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:default-k8s-diff-port-149795 Clientid:01:52:54:00:92:f9:54}
	I0908 11:58:12.508949  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined IP address 192.168.39.109 and MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:12.509087  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHPort
	I0908 11:58:12.509266  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHKeyPath
	I0908 11:58:12.509427  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHKeyPath
	I0908 11:58:12.509590  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHUsername
	I0908 11:58:12.509821  812547 main.go:141] libmachine: Using SSH client type: native
	I0908 11:58:12.510143  812547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0908 11:58:12.510161  812547 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 11:58:12.622290  812547 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0908 11:58:12.622341  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetMachineName
	I0908 11:58:12.622622  812547 buildroot.go:166] provisioning hostname "default-k8s-diff-port-149795"
	I0908 11:58:12.622653  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetMachineName
	I0908 11:58:12.622830  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHHostname
	I0908 11:58:12.626479  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:12.627086  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f9:54", ip: ""} in network mk-default-k8s-diff-port-149795: {Iface:virbr3 ExpiryTime:2025-09-08 12:58:04 +0000 UTC Type:0 Mac:52:54:00:92:f9:54 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:default-k8s-diff-port-149795 Clientid:01:52:54:00:92:f9:54}
	I0908 11:58:12.627120  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined IP address 192.168.39.109 and MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:12.627378  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHPort
	I0908 11:58:12.627571  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHKeyPath
	I0908 11:58:12.627800  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHKeyPath
	I0908 11:58:12.628005  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHUsername
	I0908 11:58:12.628187  812547 main.go:141] libmachine: Using SSH client type: native
	I0908 11:58:12.628461  812547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0908 11:58:12.628479  812547 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-149795 && echo "default-k8s-diff-port-149795" | sudo tee /etc/hostname
	I0908 11:58:12.763725  812547 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-149795
	
	I0908 11:58:12.763757  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHHostname
	I0908 11:58:12.767404  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:12.767913  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f9:54", ip: ""} in network mk-default-k8s-diff-port-149795: {Iface:virbr3 ExpiryTime:2025-09-08 12:58:04 +0000 UTC Type:0 Mac:52:54:00:92:f9:54 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:default-k8s-diff-port-149795 Clientid:01:52:54:00:92:f9:54}
	I0908 11:58:12.767939  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHPort
	I0908 11:58:12.767953  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined IP address 192.168.39.109 and MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:12.768136  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHKeyPath
	I0908 11:58:12.768258  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHKeyPath
	I0908 11:58:12.768348  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHUsername
	I0908 11:58:12.768475  812547 main.go:141] libmachine: Using SSH client type: native
	I0908 11:58:12.768760  812547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0908 11:58:12.768789  812547 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-149795' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-149795/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-149795' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 11:58:12.898758  812547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 11:58:12.898806  812547 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21503-748170/.minikube CaCertPath:/home/jenkins/minikube-integration/21503-748170/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21503-748170/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21503-748170/.minikube}
	I0908 11:58:12.898834  812547 buildroot.go:174] setting up certificates
	I0908 11:58:12.898847  812547 provision.go:84] configureAuth start
	I0908 11:58:12.898860  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetMachineName
	I0908 11:58:12.899183  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetIP
	I0908 11:58:12.902652  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:12.903213  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f9:54", ip: ""} in network mk-default-k8s-diff-port-149795: {Iface:virbr3 ExpiryTime:2025-09-08 12:58:04 +0000 UTC Type:0 Mac:52:54:00:92:f9:54 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:default-k8s-diff-port-149795 Clientid:01:52:54:00:92:f9:54}
	I0908 11:58:12.903270  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined IP address 192.168.39.109 and MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:12.903577  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHHostname
	I0908 11:58:12.906329  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:12.906718  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f9:54", ip: ""} in network mk-default-k8s-diff-port-149795: {Iface:virbr3 ExpiryTime:2025-09-08 12:58:04 +0000 UTC Type:0 Mac:52:54:00:92:f9:54 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:default-k8s-diff-port-149795 Clientid:01:52:54:00:92:f9:54}
	I0908 11:58:12.906750  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined IP address 192.168.39.109 and MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:12.906913  812547 provision.go:143] copyHostCerts
	I0908 11:58:12.906986  812547 exec_runner.go:144] found /home/jenkins/minikube-integration/21503-748170/.minikube/cert.pem, removing ...
	I0908 11:58:12.907006  812547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21503-748170/.minikube/cert.pem
	I0908 11:58:12.907087  812547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21503-748170/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21503-748170/.minikube/cert.pem (1123 bytes)
	I0908 11:58:12.907208  812547 exec_runner.go:144] found /home/jenkins/minikube-integration/21503-748170/.minikube/key.pem, removing ...
	I0908 11:58:12.907219  812547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21503-748170/.minikube/key.pem
	I0908 11:58:12.907251  812547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21503-748170/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21503-748170/.minikube/key.pem (1675 bytes)
	I0908 11:58:12.907328  812547 exec_runner.go:144] found /home/jenkins/minikube-integration/21503-748170/.minikube/ca.pem, removing ...
	I0908 11:58:12.907337  812547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21503-748170/.minikube/ca.pem
	I0908 11:58:12.907365  812547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21503-748170/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21503-748170/.minikube/ca.pem (1078 bytes)
	I0908 11:58:12.907442  812547 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21503-748170/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21503-748170/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21503-748170/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-149795 san=[127.0.0.1 192.168.39.109 default-k8s-diff-port-149795 localhost minikube]
	I0908 11:58:13.071967  812547 provision.go:177] copyRemoteCerts
	I0908 11:58:13.072035  812547 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 11:58:13.072063  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHHostname
	I0908 11:58:13.075619  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:13.076095  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f9:54", ip: ""} in network mk-default-k8s-diff-port-149795: {Iface:virbr3 ExpiryTime:2025-09-08 12:58:04 +0000 UTC Type:0 Mac:52:54:00:92:f9:54 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:default-k8s-diff-port-149795 Clientid:01:52:54:00:92:f9:54}
	I0908 11:58:13.076133  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined IP address 192.168.39.109 and MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:13.076317  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHPort
	I0908 11:58:13.076518  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHKeyPath
	I0908 11:58:13.076702  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHUsername
	I0908 11:58:13.076862  812547 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/default-k8s-diff-port-149795/id_rsa Username:docker}
	I0908 11:58:13.166933  812547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0908 11:58:13.208896  812547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0908 11:58:13.249356  812547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0908 11:58:13.288748  812547 provision.go:87] duration metric: took 389.88528ms to configureAuth
	I0908 11:58:13.288777  812547 buildroot.go:189] setting minikube options for container-runtime
	I0908 11:58:13.289019  812547 config.go:182] Loaded profile config "default-k8s-diff-port-149795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 11:58:13.289136  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHHostname
	I0908 11:58:13.292902  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:13.293282  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f9:54", ip: ""} in network mk-default-k8s-diff-port-149795: {Iface:virbr3 ExpiryTime:2025-09-08 12:58:04 +0000 UTC Type:0 Mac:52:54:00:92:f9:54 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:default-k8s-diff-port-149795 Clientid:01:52:54:00:92:f9:54}
	I0908 11:58:13.293301  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined IP address 192.168.39.109 and MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:13.293603  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHPort
	I0908 11:58:13.293758  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHKeyPath
	I0908 11:58:13.293867  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHKeyPath
	I0908 11:58:13.293971  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHUsername
	I0908 11:58:13.294218  812547 main.go:141] libmachine: Using SSH client type: native
	I0908 11:58:13.294511  812547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0908 11:58:13.294536  812547 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0908 11:58:13.578099  812547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0908 11:58:13.578130  812547 machine.go:96] duration metric: took 1.072389547s to provisionDockerMachine
	I0908 11:58:13.578143  812547 start.go:293] postStartSetup for "default-k8s-diff-port-149795" (driver="kvm2")
	I0908 11:58:13.578163  812547 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 11:58:13.578195  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .DriverName
	I0908 11:58:13.578523  812547 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 11:58:13.578555  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHHostname
	I0908 11:58:13.581884  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:13.582293  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f9:54", ip: ""} in network mk-default-k8s-diff-port-149795: {Iface:virbr3 ExpiryTime:2025-09-08 12:58:04 +0000 UTC Type:0 Mac:52:54:00:92:f9:54 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:default-k8s-diff-port-149795 Clientid:01:52:54:00:92:f9:54}
	I0908 11:58:13.582318  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined IP address 192.168.39.109 and MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:13.582623  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHPort
	I0908 11:58:13.582829  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHKeyPath
	I0908 11:58:13.582964  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHUsername
	I0908 11:58:13.583073  812547 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/default-k8s-diff-port-149795/id_rsa Username:docker}
	I0908 11:58:13.677522  812547 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 11:58:13.683800  812547 info.go:137] Remote host: Buildroot 2025.02
	I0908 11:58:13.683830  812547 filesync.go:126] Scanning /home/jenkins/minikube-integration/21503-748170/.minikube/addons for local assets ...
	I0908 11:58:13.683893  812547 filesync.go:126] Scanning /home/jenkins/minikube-integration/21503-748170/.minikube/files for local assets ...
	I0908 11:58:13.683994  812547 filesync.go:149] local asset: /home/jenkins/minikube-integration/21503-748170/.minikube/files/etc/ssl/certs/7523322.pem -> 7523322.pem in /etc/ssl/certs
	I0908 11:58:13.684099  812547 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0908 11:58:13.699967  812547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/files/etc/ssl/certs/7523322.pem --> /etc/ssl/certs/7523322.pem (1708 bytes)
	I0908 11:58:13.734189  812547 start.go:296] duration metric: took 156.025518ms for postStartSetup
	I0908 11:58:13.734269  812547 fix.go:56] duration metric: took 24.4988088s for fixHost
	I0908 11:58:13.734304  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHHostname
	I0908 11:58:13.737252  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:13.737721  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f9:54", ip: ""} in network mk-default-k8s-diff-port-149795: {Iface:virbr3 ExpiryTime:2025-09-08 12:58:04 +0000 UTC Type:0 Mac:52:54:00:92:f9:54 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:default-k8s-diff-port-149795 Clientid:01:52:54:00:92:f9:54}
	I0908 11:58:13.737765  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined IP address 192.168.39.109 and MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:13.737924  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHPort
	I0908 11:58:13.738142  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHKeyPath
	I0908 11:58:13.738352  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHKeyPath
	I0908 11:58:13.738501  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHUsername
	I0908 11:58:13.738672  812547 main.go:141] libmachine: Using SSH client type: native
	I0908 11:58:13.738981  812547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0908 11:58:13.738998  812547 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0908 11:58:13.847065  812547 main.go:141] libmachine: SSH cmd err, output: <nil>: 1757332693.825814239
	
	I0908 11:58:13.847088  812547 fix.go:216] guest clock: 1757332693.825814239
	I0908 11:58:13.847097  812547 fix.go:229] Guest: 2025-09-08 11:58:13.825814239 +0000 UTC Remote: 2025-09-08 11:58:13.734277311 +0000 UTC m=+30.887797732 (delta=91.536928ms)
	I0908 11:58:13.847137  812547 fix.go:200] guest clock delta is within tolerance: 91.536928ms
	I0908 11:58:13.847150  812547 start.go:83] releasing machines lock for "default-k8s-diff-port-149795", held for 24.611794175s
	I0908 11:58:13.847177  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .DriverName
	I0908 11:58:13.847472  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetIP
	I0908 11:58:13.850596  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:13.851119  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f9:54", ip: ""} in network mk-default-k8s-diff-port-149795: {Iface:virbr3 ExpiryTime:2025-09-08 12:58:04 +0000 UTC Type:0 Mac:52:54:00:92:f9:54 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:default-k8s-diff-port-149795 Clientid:01:52:54:00:92:f9:54}
	I0908 11:58:13.851148  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined IP address 192.168.39.109 and MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:13.851259  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .DriverName
	I0908 11:58:13.851760  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .DriverName
	I0908 11:58:13.851935  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .DriverName
	I0908 11:58:13.852032  812547 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 11:58:13.852080  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHHostname
	I0908 11:58:13.852129  812547 ssh_runner.go:195] Run: cat /version.json
	I0908 11:58:13.852165  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHHostname
	I0908 11:58:13.855506  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:13.856015  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f9:54", ip: ""} in network mk-default-k8s-diff-port-149795: {Iface:virbr3 ExpiryTime:2025-09-08 12:58:04 +0000 UTC Type:0 Mac:52:54:00:92:f9:54 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:default-k8s-diff-port-149795 Clientid:01:52:54:00:92:f9:54}
	I0908 11:58:13.856048  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined IP address 192.168.39.109 and MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:13.856212  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHPort
	I0908 11:58:13.856295  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:13.856431  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHKeyPath
	I0908 11:58:13.856585  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f9:54", ip: ""} in network mk-default-k8s-diff-port-149795: {Iface:virbr3 ExpiryTime:2025-09-08 12:58:04 +0000 UTC Type:0 Mac:52:54:00:92:f9:54 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:default-k8s-diff-port-149795 Clientid:01:52:54:00:92:f9:54}
	I0908 11:58:13.856606  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHUsername
	I0908 11:58:13.856611  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined IP address 192.168.39.109 and MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:13.856769  812547 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/default-k8s-diff-port-149795/id_rsa Username:docker}
	I0908 11:58:13.857124  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHPort
	I0908 11:58:13.857316  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHKeyPath
	I0908 11:58:13.857506  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHUsername
	I0908 11:58:13.857675  812547 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/default-k8s-diff-port-149795/id_rsa Username:docker}
	I0908 11:58:13.972413  812547 ssh_runner.go:195] Run: systemctl --version
	I0908 11:58:13.979046  812547 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0908 11:58:14.131264  812547 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0908 11:58:14.139064  812547 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0908 11:58:14.139130  812547 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 11:58:14.161596  812547 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0908 11:58:14.161624  812547 start.go:495] detecting cgroup driver to use...
	I0908 11:58:14.161704  812547 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 11:58:14.186106  812547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 11:58:14.206991  812547 docker.go:218] disabling cri-docker service (if available) ...
	I0908 11:58:14.207044  812547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0908 11:58:14.224613  812547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0908 11:58:14.240993  812547 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0908 11:58:14.395059  812547 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0908 11:58:14.544624  812547 docker.go:234] disabling docker service ...
	I0908 11:58:14.544705  812547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0908 11:58:14.561141  812547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0908 11:58:14.575967  812547 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0908 11:58:14.792330  812547 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0908 11:58:14.967740  812547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 11:58:14.985611  812547 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 11:58:15.014490  812547 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0908 11:58:15.014562  812547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:58:15.028896  812547 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0908 11:58:15.028950  812547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:58:15.043193  812547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:58:15.055922  812547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:58:15.070269  812547 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 11:58:15.084368  812547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:58:15.096945  812547 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:58:15.122776  812547 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:58:15.140358  812547 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 11:58:15.151464  812547 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0908 11:58:15.151540  812547 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0908 11:58:15.175536  812547 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 11:58:15.188179  812547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:58:15.351293  812547 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0908 11:58:15.477833  812547 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0908 11:58:15.477924  812547 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0908 11:58:15.483478  812547 start.go:563] Will wait 60s for crictl version
	I0908 11:58:15.483545  812547 ssh_runner.go:195] Run: which crictl
	I0908 11:58:15.487576  812547 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 11:58:15.531589  812547 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0908 11:58:15.531724  812547 ssh_runner.go:195] Run: crio --version
	I0908 11:58:15.561931  812547 ssh_runner.go:195] Run: crio --version
	I0908 11:58:15.591994  812547 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0908 11:58:15.593170  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetIP
	I0908 11:58:15.595787  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:15.596129  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f9:54", ip: ""} in network mk-default-k8s-diff-port-149795: {Iface:virbr3 ExpiryTime:2025-09-08 12:58:04 +0000 UTC Type:0 Mac:52:54:00:92:f9:54 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:default-k8s-diff-port-149795 Clientid:01:52:54:00:92:f9:54}
	I0908 11:58:15.596156  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined IP address 192.168.39.109 and MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:15.596409  812547 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0908 11:58:15.601047  812547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 11:58:15.615387  812547 kubeadm.go:875] updating cluster {Name:default-k8s-diff-port-149795 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-149795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 11:58:15.615514  812547 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 11:58:15.615556  812547 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 11:58:15.654367  812547 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.0". assuming images are not preloaded.
	I0908 11:58:15.654438  812547 ssh_runner.go:195] Run: which lz4
	I0908 11:58:15.658898  812547 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0908 11:58:15.664068  812547 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0908 11:58:15.664125  812547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409455026 bytes)
	I0908 11:58:17.338625  812547 crio.go:462] duration metric: took 1.679773351s to copy over tarball
	I0908 11:58:17.338725  812547 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0908 11:58:18.982976  812547 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.644214726s)
	I0908 11:58:18.983007  812547 crio.go:469] duration metric: took 1.644347643s to extract the tarball
	I0908 11:58:18.983016  812547 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0908 11:58:19.023691  812547 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 11:58:19.076722  812547 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 11:58:19.076766  812547 cache_images.go:85] Images are preloaded, skipping loading
	I0908 11:58:19.076778  812547 kubeadm.go:926] updating node { 192.168.39.109 8444 v1.34.0 crio true true} ...
	I0908 11:58:19.076916  812547 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-149795 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.109
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-149795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0908 11:58:19.077003  812547 ssh_runner.go:195] Run: crio config
	I0908 11:58:19.126661  812547 cni.go:84] Creating CNI manager for ""
	I0908 11:58:19.126689  812547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0908 11:58:19.126704  812547 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 11:58:19.126734  812547 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.109 APIServerPort:8444 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-149795 NodeName:default-k8s-diff-port-149795 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.109"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.109 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 11:58:19.126935  812547 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.109
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-149795"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.109"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.109"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0908 11:58:19.127023  812547 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 11:58:19.139068  812547 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 11:58:19.139136  812547 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 11:58:19.150689  812547 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0908 11:58:19.170879  812547 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 11:58:19.192958  812547 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2231 bytes)
	I0908 11:58:19.217642  812547 ssh_runner.go:195] Run: grep 192.168.39.109	control-plane.minikube.internal$ /etc/hosts
	I0908 11:58:19.221959  812547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.109	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 11:58:19.238620  812547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:58:19.396396  812547 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 11:58:19.439673  812547 certs.go:68] Setting up /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/default-k8s-diff-port-149795 for IP: 192.168.39.109
	I0908 11:58:19.439697  812547 certs.go:194] generating shared ca certs ...
	I0908 11:58:19.439714  812547 certs.go:226] acquiring lock for ca certs: {Name:mkaa8fe7cb1fe9bdb745b85589d42151c557e20e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:58:19.439877  812547 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21503-748170/.minikube/ca.key
	I0908 11:58:19.439927  812547 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21503-748170/.minikube/proxy-client-ca.key
	I0908 11:58:19.439943  812547 certs.go:256] generating profile certs ...
	I0908 11:58:19.440053  812547 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/default-k8s-diff-port-149795/client.key
	I0908 11:58:19.440151  812547 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/default-k8s-diff-port-149795/apiserver.key.0ed28a76
	I0908 11:58:19.440207  812547 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/default-k8s-diff-port-149795/proxy-client.key
	I0908 11:58:19.440370  812547 certs.go:484] found cert: /home/jenkins/minikube-integration/21503-748170/.minikube/certs/752332.pem (1338 bytes)
	W0908 11:58:19.440412  812547 certs.go:480] ignoring /home/jenkins/minikube-integration/21503-748170/.minikube/certs/752332_empty.pem, impossibly tiny 0 bytes
	I0908 11:58:19.440426  812547 certs.go:484] found cert: /home/jenkins/minikube-integration/21503-748170/.minikube/certs/ca-key.pem (1679 bytes)
	I0908 11:58:19.440459  812547 certs.go:484] found cert: /home/jenkins/minikube-integration/21503-748170/.minikube/certs/ca.pem (1078 bytes)
	I0908 11:58:19.440488  812547 certs.go:484] found cert: /home/jenkins/minikube-integration/21503-748170/.minikube/certs/cert.pem (1123 bytes)
	I0908 11:58:19.440525  812547 certs.go:484] found cert: /home/jenkins/minikube-integration/21503-748170/.minikube/certs/key.pem (1675 bytes)
	I0908 11:58:19.440584  812547 certs.go:484] found cert: /home/jenkins/minikube-integration/21503-748170/.minikube/files/etc/ssl/certs/7523322.pem (1708 bytes)
	I0908 11:58:19.441283  812547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 11:58:19.482073  812547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0908 11:58:19.515402  812547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 11:58:19.544994  812547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0908 11:58:19.573132  812547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/default-k8s-diff-port-149795/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0908 11:58:19.601356  812547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/default-k8s-diff-port-149795/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0908 11:58:19.629021  812547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/default-k8s-diff-port-149795/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 11:58:19.656705  812547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/default-k8s-diff-port-149795/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0908 11:58:19.684332  812547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/files/etc/ssl/certs/7523322.pem --> /usr/share/ca-certificates/7523322.pem (1708 bytes)
	I0908 11:58:19.711799  812547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 11:58:19.738871  812547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/certs/752332.pem --> /usr/share/ca-certificates/752332.pem (1338 bytes)
	I0908 11:58:19.766478  812547 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 11:58:19.785771  812547 ssh_runner.go:195] Run: openssl version
	I0908 11:58:19.791962  812547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 11:58:19.804523  812547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:58:19.809633  812547 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 10:30 /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:58:19.809703  812547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:58:19.816669  812547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0908 11:58:19.829968  812547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/752332.pem && ln -fs /usr/share/ca-certificates/752332.pem /etc/ssl/certs/752332.pem"
	I0908 11:58:19.842724  812547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/752332.pem
	I0908 11:58:19.847572  812547 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  8 10:41 /usr/share/ca-certificates/752332.pem
	I0908 11:58:19.847629  812547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/752332.pem
	I0908 11:58:19.854389  812547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/752332.pem /etc/ssl/certs/51391683.0"
	I0908 11:58:19.867172  812547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7523322.pem && ln -fs /usr/share/ca-certificates/7523322.pem /etc/ssl/certs/7523322.pem"
	I0908 11:58:19.879993  812547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7523322.pem
	I0908 11:58:19.885001  812547 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  8 10:41 /usr/share/ca-certificates/7523322.pem
	I0908 11:58:19.885048  812547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7523322.pem
	I0908 11:58:19.892243  812547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7523322.pem /etc/ssl/certs/3ec20f2e.0"
	I0908 11:58:19.905223  812547 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 11:58:19.910251  812547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0908 11:58:19.917394  812547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0908 11:58:19.924306  812547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0908 11:58:19.931255  812547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0908 11:58:19.938133  812547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0908 11:58:19.945452  812547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0908 11:58:19.952120  812547 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-149795 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-149795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 11:58:19.952206  812547 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0908 11:58:19.952253  812547 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 11:58:19.992645  812547 cri.go:89] found id: ""
	I0908 11:58:19.992735  812547 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0908 11:58:20.004767  812547 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0908 11:58:20.004788  812547 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0908 11:58:20.004835  812547 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0908 11:58:20.016636  812547 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0908 11:58:20.017104  812547 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-149795" does not appear in /home/jenkins/minikube-integration/21503-748170/kubeconfig
	I0908 11:58:20.017241  812547 kubeconfig.go:62] /home/jenkins/minikube-integration/21503-748170/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-149795" cluster setting kubeconfig missing "default-k8s-diff-port-149795" context setting]
	I0908 11:58:20.019484  812547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21503-748170/kubeconfig: {Name:mk78ced2572c8fbe21fb139deb9ae019703be092 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:58:20.020729  812547 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0908 11:58:20.032051  812547 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.39.109
	I0908 11:58:20.032090  812547 kubeadm.go:1152] stopping kube-system containers ...
	I0908 11:58:20.032104  812547 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0908 11:58:20.032159  812547 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 11:58:20.071745  812547 cri.go:89] found id: ""
	I0908 11:58:20.071812  812547 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0908 11:58:20.090648  812547 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0908 11:58:20.102580  812547 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0908 11:58:20.102609  812547 kubeadm.go:157] found existing configuration files:
	
	I0908 11:58:20.102677  812547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0908 11:58:20.113717  812547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0908 11:58:20.113780  812547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0908 11:58:20.125456  812547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0908 11:58:20.135984  812547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0908 11:58:20.136051  812547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0908 11:58:20.147670  812547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0908 11:58:20.158731  812547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0908 11:58:20.158799  812547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0908 11:58:20.169704  812547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0908 11:58:20.180220  812547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0908 11:58:20.180281  812547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0908 11:58:20.192722  812547 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0908 11:58:20.204082  812547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 11:58:20.259335  812547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 11:58:21.700749  812547 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.441363109s)
	I0908 11:58:21.700803  812547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0908 11:58:21.935881  812547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 11:58:22.004530  812547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0908 11:58:22.080673  812547 api_server.go:52] waiting for apiserver process to appear ...
	I0908 11:58:22.080790  812547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 11:58:22.581458  812547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 11:58:23.081324  812547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 11:58:23.581927  812547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 11:58:23.617828  812547 api_server.go:72] duration metric: took 1.537159124s to wait for apiserver process to appear ...
	I0908 11:58:23.617858  812547 api_server.go:88] waiting for apiserver healthz status ...
	I0908 11:58:23.617884  812547 api_server.go:253] Checking apiserver healthz at https://192.168.39.109:8444/healthz ...
	I0908 11:58:23.618456  812547 api_server.go:269] stopped: https://192.168.39.109:8444/healthz: Get "https://192.168.39.109:8444/healthz": dial tcp 192.168.39.109:8444: connect: connection refused
	I0908 11:58:24.118130  812547 api_server.go:253] Checking apiserver healthz at https://192.168.39.109:8444/healthz ...
	I0908 11:58:26.273974  812547 api_server.go:279] https://192.168.39.109:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0908 11:58:26.274003  812547 api_server.go:103] status: https://192.168.39.109:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0908 11:58:26.274018  812547 api_server.go:253] Checking apiserver healthz at https://192.168.39.109:8444/healthz ...
	I0908 11:58:26.300983  812547 api_server.go:279] https://192.168.39.109:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0908 11:58:26.301008  812547 api_server.go:103] status: https://192.168.39.109:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0908 11:58:26.618533  812547 api_server.go:253] Checking apiserver healthz at https://192.168.39.109:8444/healthz ...
	I0908 11:58:26.623470  812547 api_server.go:279] https://192.168.39.109:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 11:58:26.623497  812547 api_server.go:103] status: https://192.168.39.109:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0908 11:58:27.118139  812547 api_server.go:253] Checking apiserver healthz at https://192.168.39.109:8444/healthz ...
	I0908 11:58:27.126489  812547 api_server.go:279] https://192.168.39.109:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 11:58:27.126527  812547 api_server.go:103] status: https://192.168.39.109:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0908 11:58:27.618153  812547 api_server.go:253] Checking apiserver healthz at https://192.168.39.109:8444/healthz ...
	I0908 11:58:27.625893  812547 api_server.go:279] https://192.168.39.109:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 11:58:27.625929  812547 api_server.go:103] status: https://192.168.39.109:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0908 11:58:28.118785  812547 api_server.go:253] Checking apiserver healthz at https://192.168.39.109:8444/healthz ...
	I0908 11:58:28.127432  812547 api_server.go:279] https://192.168.39.109:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 11:58:28.127481  812547 api_server.go:103] status: https://192.168.39.109:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0908 11:58:28.618109  812547 api_server.go:253] Checking apiserver healthz at https://192.168.39.109:8444/healthz ...
	I0908 11:58:28.622835  812547 api_server.go:279] https://192.168.39.109:8444/healthz returned 200:
	ok
	I0908 11:58:28.629247  812547 api_server.go:141] control plane version: v1.34.0
	I0908 11:58:28.629275  812547 api_server.go:131] duration metric: took 5.0114057s to wait for apiserver health ...
	I0908 11:58:28.629288  812547 cni.go:84] Creating CNI manager for ""
	I0908 11:58:28.629298  812547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0908 11:58:28.630982  812547 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0908 11:58:28.632061  812547 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0908 11:58:28.644944  812547 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0908 11:58:28.665882  812547 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 11:58:28.671195  812547 system_pods.go:59] 8 kube-system pods found
	I0908 11:58:28.671246  812547 system_pods.go:61] "coredns-66bc5c9577-8bmsd" [31101ce9-d6dc-4f5b-ad19-555dc9e29a68] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 11:58:28.671257  812547 system_pods.go:61] "etcd-default-k8s-diff-port-149795" [dfeca0dc-2ca7-4732-856f-426cbd0d7f0d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 11:58:28.671265  812547 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-149795" [b8bade15-4ae8-461f-af77-cd65e48e34c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 11:58:28.671277  812547 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-149795" [2c6f4438-958a-4549-8c1d-98ac9429cf5b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 11:58:28.671286  812547 system_pods.go:61] "kube-proxy-vmsg4" [91462068-fe67-4ff4-b9db-f7016960ab40] Running
	I0908 11:58:28.671299  812547 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-149795" [60f180e7-5cf2-487b-b6c8-fe985b5832a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 11:58:28.671307  812547 system_pods.go:61] "metrics-server-746fcd58dc-6hdsd" [c9e0e26f-f05a-4d6d-979b-711c4381d179] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 11:58:28.671317  812547 system_pods.go:61] "storage-provisioner" [0cb21d0b-e87b-4223-ab66-fb22e49c358a] Running
	I0908 11:58:28.671325  812547 system_pods.go:74] duration metric: took 5.418412ms to wait for pod list to return data ...
	I0908 11:58:28.671335  812547 node_conditions.go:102] verifying NodePressure condition ...
	I0908 11:58:28.677939  812547 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0908 11:58:28.677969  812547 node_conditions.go:123] node cpu capacity is 2
	I0908 11:58:28.677984  812547 node_conditions.go:105] duration metric: took 6.639345ms to run NodePressure ...
	I0908 11:58:28.678005  812547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 11:58:28.934748  812547 kubeadm.go:720] waiting for restarted kubelet to initialise ...
	I0908 11:58:28.938318  812547 kubeadm.go:735] kubelet initialised
	I0908 11:58:28.938338  812547 kubeadm.go:736] duration metric: took 3.563579ms waiting for restarted kubelet to initialise ...
	I0908 11:58:28.938355  812547 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0908 11:58:28.954028  812547 ops.go:34] apiserver oom_adj: -16
	I0908 11:58:28.954068  812547 kubeadm.go:593] duration metric: took 8.949272474s to restartPrimaryControlPlane
	I0908 11:58:28.954080  812547 kubeadm.go:394] duration metric: took 9.001966386s to StartCluster
	I0908 11:58:28.954118  812547 settings.go:142] acquiring lock: {Name:mk18c67e9470bbfdfeaf7a5d3ce5d7a1813bc966 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:58:28.954212  812547 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21503-748170/kubeconfig
	I0908 11:58:28.954815  812547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21503-748170/kubeconfig: {Name:mk78ced2572c8fbe21fb139deb9ae019703be092 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:58:28.955039  812547 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.109 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0908 11:58:28.955128  812547 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0908 11:58:28.955248  812547 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-149795"
	I0908 11:58:28.955269  812547 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-149795"
	W0908 11:58:28.955282  812547 addons.go:247] addon storage-provisioner should already be in state true
	I0908 11:58:28.955290  812547 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-149795"
	I0908 11:58:28.955301  812547 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-149795"
	I0908 11:58:28.955319  812547 host.go:66] Checking if "default-k8s-diff-port-149795" exists ...
	I0908 11:58:28.955310  812547 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-149795"
	I0908 11:58:28.955344  812547 config.go:182] Loaded profile config "default-k8s-diff-port-149795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 11:58:28.955362  812547 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-149795"
	W0908 11:58:28.955374  812547 addons.go:247] addon dashboard should already be in state true
	I0908 11:58:28.955411  812547 host.go:66] Checking if "default-k8s-diff-port-149795" exists ...
	I0908 11:58:28.955318  812547 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-149795"
	I0908 11:58:28.955340  812547 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-149795"
	W0908 11:58:28.955584  812547 addons.go:247] addon metrics-server should already be in state true
	I0908 11:58:28.955609  812547 host.go:66] Checking if "default-k8s-diff-port-149795" exists ...
	I0908 11:58:28.955734  812547 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:58:28.955764  812547 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:58:28.955786  812547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:58:28.955837  812547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:58:28.955852  812547 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:58:28.955885  812547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:58:28.956004  812547 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:58:28.956049  812547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:58:28.957327  812547 out.go:179] * Verifying Kubernetes components...
	I0908 11:58:28.958547  812547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:58:28.971865  812547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42757
	I0908 11:58:28.971874  812547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42297
	I0908 11:58:28.972181  812547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38493
	I0908 11:58:28.972350  812547 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:58:28.972368  812547 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:58:28.972565  812547 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:58:28.972809  812547 main.go:141] libmachine: Using API Version  1
	I0908 11:58:28.972835  812547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:58:28.972984  812547 main.go:141] libmachine: Using API Version  1
	I0908 11:58:28.973004  812547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:58:28.972990  812547 main.go:141] libmachine: Using API Version  1
	I0908 11:58:28.973034  812547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:58:28.973240  812547 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:58:28.973466  812547 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:58:28.973489  812547 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:58:28.973653  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetState
	I0908 11:58:28.973853  812547 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:58:28.973905  812547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:58:28.974062  812547 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:58:28.974100  812547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:58:28.974916  812547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35481
	I0908 11:58:28.975397  812547 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:58:28.975910  812547 main.go:141] libmachine: Using API Version  1
	I0908 11:58:28.975926  812547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:58:28.976301  812547 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:58:28.976714  812547 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:58:28.976744  812547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:58:28.976717  812547 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-149795"
	W0908 11:58:28.976820  812547 addons.go:247] addon default-storageclass should already be in state true
	I0908 11:58:28.976854  812547 host.go:66] Checking if "default-k8s-diff-port-149795" exists ...
	I0908 11:58:28.988936  812547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37717
	I0908 11:58:28.989336  812547 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:58:28.989713  812547 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:58:28.989765  812547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:58:28.989837  812547 main.go:141] libmachine: Using API Version  1
	I0908 11:58:28.989859  812547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:58:28.990234  812547 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:58:28.990470  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetState
	I0908 11:58:28.992342  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .DriverName
	I0908 11:58:28.992779  812547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36937
	I0908 11:58:28.993333  812547 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:58:28.993844  812547 main.go:141] libmachine: Using API Version  1
	I0908 11:58:28.993873  812547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:58:28.994135  812547 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0908 11:58:28.994237  812547 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:58:28.994443  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetState
	I0908 11:58:28.995212  812547 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0908 11:58:28.995234  812547 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0908 11:58:28.995254  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHHostname
	I0908 11:58:28.996295  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .DriverName
	I0908 11:58:28.997602  812547 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 11:58:28.998603  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:28.998772  812547 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 11:58:28.998788  812547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0908 11:58:28.998807  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHHostname
	I0908 11:58:28.999096  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f9:54", ip: ""} in network mk-default-k8s-diff-port-149795: {Iface:virbr3 ExpiryTime:2025-09-08 12:58:04 +0000 UTC Type:0 Mac:52:54:00:92:f9:54 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:default-k8s-diff-port-149795 Clientid:01:52:54:00:92:f9:54}
	I0908 11:58:28.999118  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined IP address 192.168.39.109 and MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:28.999285  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHPort
	I0908 11:58:28.999462  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHKeyPath
	I0908 11:58:28.999598  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHUsername
	I0908 11:58:28.999765  812547 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/default-k8s-diff-port-149795/id_rsa Username:docker}
	I0908 11:58:29.002370  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:29.002836  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f9:54", ip: ""} in network mk-default-k8s-diff-port-149795: {Iface:virbr3 ExpiryTime:2025-09-08 12:58:04 +0000 UTC Type:0 Mac:52:54:00:92:f9:54 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:default-k8s-diff-port-149795 Clientid:01:52:54:00:92:f9:54}
	I0908 11:58:29.002866  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined IP address 192.168.39.109 and MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:29.003029  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHPort
	I0908 11:58:29.003208  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHKeyPath
	I0908 11:58:29.003387  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHUsername
	I0908 11:58:29.003526  812547 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/default-k8s-diff-port-149795/id_rsa Username:docker}
	I0908 11:58:29.008162  812547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40067
	I0908 11:58:29.008693  812547 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:58:29.009217  812547 main.go:141] libmachine: Using API Version  1
	I0908 11:58:29.009244  812547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:58:29.009599  812547 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:58:29.010166  812547 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:58:29.010208  812547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:58:29.010651  812547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35431
	I0908 11:58:29.011117  812547 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:58:29.011609  812547 main.go:141] libmachine: Using API Version  1
	I0908 11:58:29.011629  812547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:58:29.011905  812547 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:58:29.012079  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetState
	I0908 11:58:29.013931  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .DriverName
	I0908 11:58:29.015810  812547 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0908 11:58:29.016977  812547 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0908 11:58:29.017893  812547 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0908 11:58:29.017915  812547 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0908 11:58:29.017938  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHHostname
	I0908 11:58:29.020860  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:29.021318  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f9:54", ip: ""} in network mk-default-k8s-diff-port-149795: {Iface:virbr3 ExpiryTime:2025-09-08 12:58:04 +0000 UTC Type:0 Mac:52:54:00:92:f9:54 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:default-k8s-diff-port-149795 Clientid:01:52:54:00:92:f9:54}
	I0908 11:58:29.021351  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined IP address 192.168.39.109 and MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:29.021621  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHPort
	I0908 11:58:29.021805  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHKeyPath
	I0908 11:58:29.021972  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHUsername
	I0908 11:58:29.022245  812547 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/default-k8s-diff-port-149795/id_rsa Username:docker}
	I0908 11:58:29.027796  812547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38233
	I0908 11:58:29.028199  812547 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:58:29.028627  812547 main.go:141] libmachine: Using API Version  1
	I0908 11:58:29.028648  812547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:58:29.029265  812547 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:58:29.029462  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetState
	I0908 11:58:29.030872  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .DriverName
	I0908 11:58:29.031091  812547 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0908 11:58:29.031107  812547 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0908 11:58:29.031124  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHHostname
	I0908 11:58:29.034015  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:29.034450  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f9:54", ip: ""} in network mk-default-k8s-diff-port-149795: {Iface:virbr3 ExpiryTime:2025-09-08 12:58:04 +0000 UTC Type:0 Mac:52:54:00:92:f9:54 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:default-k8s-diff-port-149795 Clientid:01:52:54:00:92:f9:54}
	I0908 11:58:29.034479  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined IP address 192.168.39.109 and MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:29.034660  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHPort
	I0908 11:58:29.034817  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHKeyPath
	I0908 11:58:29.035001  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHUsername
	I0908 11:58:29.035143  812547 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/default-k8s-diff-port-149795/id_rsa Username:docker}
	I0908 11:58:29.229146  812547 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 11:58:29.266019  812547 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-149795" to be "Ready" ...
	I0908 11:58:29.270153  812547 node_ready.go:49] node "default-k8s-diff-port-149795" is "Ready"
	I0908 11:58:29.270177  812547 node_ready.go:38] duration metric: took 4.120803ms for node "default-k8s-diff-port-149795" to be "Ready" ...
	I0908 11:58:29.270191  812547 api_server.go:52] waiting for apiserver process to appear ...
	I0908 11:58:29.270237  812547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 11:58:29.338415  812547 api_server.go:72] duration metric: took 383.332533ms to wait for apiserver process to appear ...
	I0908 11:58:29.338456  812547 api_server.go:88] waiting for apiserver healthz status ...
	I0908 11:58:29.338482  812547 api_server.go:253] Checking apiserver healthz at https://192.168.39.109:8444/healthz ...
	I0908 11:58:29.348820  812547 api_server.go:279] https://192.168.39.109:8444/healthz returned 200:
	ok
	I0908 11:58:29.351010  812547 api_server.go:141] control plane version: v1.34.0
	I0908 11:58:29.351041  812547 api_server.go:131] duration metric: took 12.575791ms to wait for apiserver health ...
	I0908 11:58:29.351053  812547 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 11:58:29.374273  812547 system_pods.go:59] 8 kube-system pods found
	I0908 11:58:29.374328  812547 system_pods.go:61] "coredns-66bc5c9577-8bmsd" [31101ce9-d6dc-4f5b-ad19-555dc9e29a68] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 11:58:29.374344  812547 system_pods.go:61] "etcd-default-k8s-diff-port-149795" [dfeca0dc-2ca7-4732-856f-426cbd0d7f0d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 11:58:29.374357  812547 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-149795" [b8bade15-4ae8-461f-af77-cd65e48e34c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 11:58:29.374369  812547 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-149795" [2c6f4438-958a-4549-8c1d-98ac9429cf5b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 11:58:29.374376  812547 system_pods.go:61] "kube-proxy-vmsg4" [91462068-fe67-4ff4-b9db-f7016960ab40] Running
	I0908 11:58:29.374388  812547 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-149795" [60f180e7-5cf2-487b-b6c8-fe985b5832a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 11:58:29.374396  812547 system_pods.go:61] "metrics-server-746fcd58dc-6hdsd" [c9e0e26f-f05a-4d6d-979b-711c4381d179] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 11:58:29.374400  812547 system_pods.go:61] "storage-provisioner" [0cb21d0b-e87b-4223-ab66-fb22e49c358a] Running
	I0908 11:58:29.374409  812547 system_pods.go:74] duration metric: took 23.347252ms to wait for pod list to return data ...
	I0908 11:58:29.374419  812547 default_sa.go:34] waiting for default service account to be created ...
	I0908 11:58:29.384255  812547 default_sa.go:45] found service account: "default"
	I0908 11:58:29.384285  812547 default_sa.go:55] duration metric: took 9.859516ms for default service account to be created ...
	I0908 11:58:29.384294  812547 system_pods.go:116] waiting for k8s-apps to be running ...
	I0908 11:58:29.404022  812547 system_pods.go:86] 8 kube-system pods found
	I0908 11:58:29.404098  812547 system_pods.go:89] "coredns-66bc5c9577-8bmsd" [31101ce9-d6dc-4f5b-ad19-555dc9e29a68] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 11:58:29.404114  812547 system_pods.go:89] "etcd-default-k8s-diff-port-149795" [dfeca0dc-2ca7-4732-856f-426cbd0d7f0d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 11:58:29.404130  812547 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149795" [b8bade15-4ae8-461f-af77-cd65e48e34c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 11:58:29.404143  812547 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149795" [2c6f4438-958a-4549-8c1d-98ac9429cf5b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 11:58:29.404150  812547 system_pods.go:89] "kube-proxy-vmsg4" [91462068-fe67-4ff4-b9db-f7016960ab40] Running
	I0908 11:58:29.404160  812547 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149795" [60f180e7-5cf2-487b-b6c8-fe985b5832a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 11:58:29.404175  812547 system_pods.go:89] "metrics-server-746fcd58dc-6hdsd" [c9e0e26f-f05a-4d6d-979b-711c4381d179] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 11:58:29.404182  812547 system_pods.go:89] "storage-provisioner" [0cb21d0b-e87b-4223-ab66-fb22e49c358a] Running
	I0908 11:58:29.404194  812547 system_pods.go:126] duration metric: took 19.89185ms to wait for k8s-apps to be running ...
	I0908 11:58:29.404208  812547 system_svc.go:44] waiting for kubelet service to be running ....
	I0908 11:58:29.404264  812547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 11:58:29.406926  812547 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0908 11:58:29.406952  812547 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0908 11:58:29.417366  812547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 11:58:29.428033  812547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0908 11:58:29.475974  812547 system_svc.go:56] duration metric: took 71.758039ms WaitForService to wait for kubelet
	I0908 11:58:29.476005  812547 kubeadm.go:578] duration metric: took 520.932705ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 11:58:29.476023  812547 node_conditions.go:102] verifying NodePressure condition ...
	I0908 11:58:29.487222  812547 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0908 11:58:29.487250  812547 node_conditions.go:123] node cpu capacity is 2
	I0908 11:58:29.487260  812547 node_conditions.go:105] duration metric: took 11.232529ms to run NodePressure ...
	I0908 11:58:29.487272  812547 start.go:241] waiting for startup goroutines ...
	I0908 11:58:29.498094  812547 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0908 11:58:29.498126  812547 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0908 11:58:29.574478  812547 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0908 11:58:29.574506  812547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0908 11:58:29.629606  812547 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0908 11:58:29.629644  812547 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0908 11:58:29.662865  812547 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0908 11:58:29.662906  812547 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0908 11:58:29.720290  812547 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0908 11:58:29.720319  812547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0908 11:58:29.733183  812547 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 11:58:29.733214  812547 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0908 11:58:29.781759  812547 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0908 11:58:29.781806  812547 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0908 11:58:29.806631  812547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 11:58:29.850357  812547 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0908 11:58:29.850399  812547 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0908 11:58:29.922320  812547 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0908 11:58:29.922357  812547 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0908 11:58:29.980722  812547 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0908 11:58:29.980835  812547 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0908 11:58:30.031626  812547 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0908 11:58:30.031662  812547 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0908 11:58:30.070327  812547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0908 11:58:31.096390  812547 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.668318659s)
	I0908 11:58:31.096454  812547 main.go:141] libmachine: Making call to close driver server
	I0908 11:58:31.096470  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .Close
	I0908 11:58:31.096824  812547 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:58:31.096843  812547 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:58:31.096855  812547 main.go:141] libmachine: Making call to close driver server
	I0908 11:58:31.096823  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | Closing plugin on server side
	I0908 11:58:31.096861  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .Close
	I0908 11:58:31.097169  812547 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:58:31.097191  812547 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:58:31.097190  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | Closing plugin on server side
	I0908 11:58:31.098919  812547 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.681522913s)
	I0908 11:58:31.098952  812547 main.go:141] libmachine: Making call to close driver server
	I0908 11:58:31.098964  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .Close
	I0908 11:58:31.099250  812547 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:58:31.099270  812547 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:58:31.099282  812547 main.go:141] libmachine: Making call to close driver server
	I0908 11:58:31.099293  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .Close
	I0908 11:58:31.099303  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | Closing plugin on server side
	I0908 11:58:31.099539  812547 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:58:31.099559  812547 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:58:31.099581  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | Closing plugin on server side
	I0908 11:58:31.135776  812547 main.go:141] libmachine: Making call to close driver server
	I0908 11:58:31.135799  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .Close
	I0908 11:58:31.136173  812547 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:58:31.136198  812547 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:58:31.295619  812547 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.488930548s)
	I0908 11:58:31.295702  812547 main.go:141] libmachine: Making call to close driver server
	I0908 11:58:31.295724  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .Close
	I0908 11:58:31.296071  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | Closing plugin on server side
	I0908 11:58:31.296139  812547 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:58:31.296148  812547 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:58:31.296161  812547 main.go:141] libmachine: Making call to close driver server
	I0908 11:58:31.296169  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .Close
	I0908 11:58:31.296434  812547 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:58:31.296452  812547 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:58:31.296464  812547 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-149795"
	I0908 11:58:31.296487  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | Closing plugin on server side
	I0908 11:58:31.732140  812547 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.661704525s)
	I0908 11:58:31.732218  812547 main.go:141] libmachine: Making call to close driver server
	I0908 11:58:31.732238  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .Close
	I0908 11:58:31.732701  812547 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:58:31.732720  812547 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:58:31.732743  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | Closing plugin on server side
	I0908 11:58:31.732785  812547 main.go:141] libmachine: Making call to close driver server
	I0908 11:58:31.732846  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .Close
	I0908 11:58:31.733100  812547 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:58:31.733118  812547 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:58:31.734877  812547 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-149795 addons enable metrics-server
	
	I0908 11:58:31.736134  812547 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0908 11:58:31.737368  812547 addons.go:514] duration metric: took 2.782255255s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0908 11:58:31.737411  812547 start.go:246] waiting for cluster config update ...
	I0908 11:58:31.737423  812547 start.go:255] writing updated cluster config ...
	I0908 11:58:31.737650  812547 ssh_runner.go:195] Run: rm -f paused
	I0908 11:58:31.743845  812547 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 11:58:31.750592  812547 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8bmsd" in "kube-system" namespace to be "Ready" or be gone ...
	W0908 11:58:33.756566  812547 pod_ready.go:104] pod "coredns-66bc5c9577-8bmsd" is not "Ready", error: <nil>
	W0908 11:58:35.757629  812547 pod_ready.go:104] pod "coredns-66bc5c9577-8bmsd" is not "Ready", error: <nil>
	W0908 11:58:38.262814  812547 pod_ready.go:104] pod "coredns-66bc5c9577-8bmsd" is not "Ready", error: <nil>
	I0908 11:58:40.757349  812547 pod_ready.go:94] pod "coredns-66bc5c9577-8bmsd" is "Ready"
	I0908 11:58:40.757390  812547 pod_ready.go:86] duration metric: took 9.006768043s for pod "coredns-66bc5c9577-8bmsd" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:58:40.760045  812547 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-149795" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:58:40.764175  812547 pod_ready.go:94] pod "etcd-default-k8s-diff-port-149795" is "Ready"
	I0908 11:58:40.764200  812547 pod_ready.go:86] duration metric: took 4.124516ms for pod "etcd-default-k8s-diff-port-149795" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:58:40.767140  812547 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-149795" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:58:41.773282  812547 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-149795" is "Ready"
	I0908 11:58:41.773309  812547 pod_ready.go:86] duration metric: took 1.006147457s for pod "kube-apiserver-default-k8s-diff-port-149795" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:58:41.776497  812547 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-149795" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:58:41.781897  812547 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-149795" is "Ready"
	I0908 11:58:41.781921  812547 pod_ready.go:86] duration metric: took 5.395768ms for pod "kube-controller-manager-default-k8s-diff-port-149795" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:58:41.956083  812547 pod_ready.go:83] waiting for pod "kube-proxy-vmsg4" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:58:42.355763  812547 pod_ready.go:94] pod "kube-proxy-vmsg4" is "Ready"
	I0908 11:58:42.355797  812547 pod_ready.go:86] duration metric: took 399.683912ms for pod "kube-proxy-vmsg4" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:58:42.555394  812547 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-149795" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:58:42.955123  812547 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-149795" is "Ready"
	I0908 11:58:42.955153  812547 pod_ready.go:86] duration metric: took 399.731995ms for pod "kube-scheduler-default-k8s-diff-port-149795" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:58:42.955166  812547 pod_ready.go:40] duration metric: took 11.211288623s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 11:58:42.998388  812547 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0908 11:58:43.000070  812547 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-149795" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 08 12:07:44 default-k8s-diff-port-149795 crio[884]: time="2025-09-08 12:07:44.516746598Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:d9971d1b61c2b240442e860d28c75ed1876d6b74546e9ae4d1caca122e147b43,Metadata:&PodSandboxMetadata{Name:dashboard-metrics-scraper-6ffb444bf9-r9vzn,Uid:f6400ab9-1f7a-4025-bae5-eb4d4dc9dae7,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1757332711859326856,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-r9vzn,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: f6400ab9-1f7a-4025-bae5-eb4d4dc9dae7,k8s-app: dashboard-metrics-scraper,pod-template-hash: 6ffb444bf9,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-08T11:58:31.521925691Z,kubernetes.io/config.source: api,seccomp.security.alpha.kubernetes.io/pod: runtime/default,},RuntimeHandler:,},&PodSandbox{Id:773bf4ff8e7c7a71f15bd803b79b8257631d0b838d
e0c95590f15229a310f7ca,Metadata:&PodSandboxMetadata{Name:kubernetes-dashboard-855c9754f9-h5hcp,Uid:d20477db-7399-4b1f-ad64-6cfa0fb34d60,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1757332711831287728,Labels:map[string]string{gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kubernetes-dashboard-855c9754f9-h5hcp,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: d20477db-7399-4b1f-ad64-6cfa0fb34d60,k8s-app: kubernetes-dashboard,pod-template-hash: 855c9754f9,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-08T11:58:31.502006160Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:564fc335152637623fee614ca3c64e414252c6befca259213154629956993fd0,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-8bmsd,Uid:31101ce9-d6dc-4f5b-ad19-555dc9e29a68,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1757332710965156761,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernet
es.pod.name: coredns-66bc5c9577-8bmsd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31101ce9-d6dc-4f5b-ad19-555dc9e29a68,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-08T11:58:27.027501541Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5d8bf6751a128d66acf89bc3aa31bac502c9ee3f9d5a79899995d52697862f0a,Metadata:&PodSandboxMetadata{Name:busybox,Uid:f7309204-a2be-4cc0-a01b-de13b6afd01e,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1757332710945047160,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f7309204-a2be-4cc0-a01b-de13b6afd01e,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-08T11:58:27.027506896Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f3094b5c5cdeaf4044277d5c04a81b8b9f32cabf4492e9705ae25fffb4d4be1f,Metadata:&PodS
andboxMetadata{Name:metrics-server-746fcd58dc-6hdsd,Uid:c9e0e26f-f05a-4d6d-979b-711c4381d179,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1757332709119928566,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-746fcd58dc-6hdsd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9e0e26f-f05a-4d6d-979b-711c4381d179,k8s-app: metrics-server,pod-template-hash: 746fcd58dc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-08T11:58:27.027504934Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e6e807891e561f2eaf19a281fc6b7d6c738e6fe93c47ab59c05c6740ba67abd6,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:0cb21d0b-e87b-4223-ab66-fb22e49c358a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1757332707354760442,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-pro
visioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cb21d0b-e87b-4223-ab66-fb22e49c358a,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-09-08T11:58:27.027505806Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5e9bb07b59271317bdf542b1520014ed4419ff83229a
0b31f45558efa466ad57,Metadata:&PodSandboxMetadata{Name:kube-proxy-vmsg4,Uid:91462068-fe67-4ff4-b9db-f7016960ab40,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1757332707347197766,Labels:map[string]string{controller-revision-hash: 6f475c7966,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-vmsg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91462068-fe67-4ff4-b9db-f7016960ab40,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-08T11:58:27.027495757Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3cf30400a1f0bfe236266236c4096dad440b5bddd406c6efaa1ecc781decf975,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-149795,Uid:1d198d64ccda796e844cc7692cb87e41,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1757332703185001307,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-1497
95,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d198d64ccda796e844cc7692cb87e41,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.109:2379,kubernetes.io/config.hash: 1d198d64ccda796e844cc7692cb87e41,kubernetes.io/config.seen: 2025-09-08T11:58:22.074752988Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a243efcf1b53413fb9c3dcce13b873c7ad6de31fa9ab9524f541e34a44d2f3ff,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-149795,Uid:512eeffaafa40f337891a4fc086eef59,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1757332703183471919,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-149795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 512eeffaafa40f337891a4fc086eef59,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 512eeffaafa40f
337891a4fc086eef59,kubernetes.io/config.seen: 2025-09-08T11:58:22.029737531Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:005462b99d1e169d956b9dadfadd9eb59f72c050155b197c0cc1128de57e543c,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-149795,Uid:f109fa0cc69fc770844283f79b5fed2c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1757332703171934536,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-149795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f109fa0cc69fc770844283f79b5fed2c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f109fa0cc69fc770844283f79b5fed2c,kubernetes.io/config.seen: 2025-09-08T11:58:22.029736680Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:165028a9051cd2b786719b418a4b005bbdd2e13a735c5e98b3072dc90b72ff57,Metadata:&PodSandboxMetadata{Name
:kube-apiserver-default-k8s-diff-port-149795,Uid:a999c546c3cf243b5bc764b1c7bcc19d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1757332703160213010,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-149795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a999c546c3cf243b5bc764b1c7bcc19d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.109:8444,kubernetes.io/config.hash: a999c546c3cf243b5bc764b1c7bcc19d,kubernetes.io/config.seen: 2025-09-08T11:58:22.029733355Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=da262d30-3e34-4d4a-ab13-7da89df49eff name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 08 12:07:44 default-k8s-diff-port-149795 crio[884]: time="2025-09-08 12:07:44.518119018Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c8f16f3f-8408-4900-b686-808f5802d9e7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 12:07:44 default-k8s-diff-port-149795 crio[884]: time="2025-09-08 12:07:44.518184914Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c8f16f3f-8408-4900-b686-808f5802d9e7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 12:07:44 default-k8s-diff-port-149795 crio[884]: time="2025-09-08 12:07:44.518390328Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:329ed0687e7c9bcd686ced15a67d7e350b63a28bc0c207c9ad1905859fb615fc,PodSandboxId:d9971d1b61c2b240442e860d28c75ed1876d6b74546e9ae4d1caca122e147b43,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:6,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1757333092121720619,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-r9vzn,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: f6400ab9-1f7a-4025-bae5-eb4d4dc9dae7,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2295a57e5d0f147bbdd47cb07012fadbe3fa31f4466b20fc874a981f413654bd,PodSandboxId:e6e807891e561f2eaf19a281fc6b7d6c738e6fe93c47ab59c05c6740ba67abd6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1757332738346959970,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cb21d0b-e87b-4223-ab66-fb22e49c358a,},Annota
tions:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c944c5685dcbe3453a0762636a7e0bf9fb8fd84df73ff41e3f5354998844c36d,PodSandboxId:5d8bf6751a128d66acf89bc3aa31bac502c9ee3f9d5a79899995d52697862f0a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1757332717948289872,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f7309204-a2be-4cc0-a01b-de13b6afd01e,},Annotations:map[string]
string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6cc782e0ec2248d9b723af4f7a4aa589befe26da4d9ba49c275cdca6f74dec7,PodSandboxId:564fc335152637623fee614ca3c64e414252c6befca259213154629956993fd0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1757332711698979676,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-8bmsd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31101ce9-d6dc-4f5b-ad19-555dc9e29a68,},Annotations:map[string]string{io.kubernetes.container.
hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:049e2bd82da59a081bdd6cc45be2ff080f311ffc832f781396eb9328ed93c742,PodSandboxId:5e9bb07b59271317bdf542b1520014ed4419ff83229a0b31f45558efa466ad57,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b
97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1757332707597457007,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vmsg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91462068-fe67-4ff4-b9db-f7016960ab40,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1d8c38b6064ace141c9fc470297bdad1b46cbfec17b7ed88917f4ed73e3f238,PodSandboxId:e6e807891e561f2eaf19a281fc6b7d6c738e6fe93c47ab59c05c6740ba67abd6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_EXITED,CreatedAt:1757332707569581970,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cb21d0b-e87b-4223-ab66-fb22e49c358a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:132e0611e671809fe2004db5b204ebb98d88547afad9f17936178a3a61691d1e,PodSandboxId:005462b99d1e169d956b9dadfadd9eb59f72c050155b197c0cc1128de57e543c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTA
INER_RUNNING,CreatedAt:1757332703435396044,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-149795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f109fa0cc69fc770844283f79b5fed2c,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e77a34bd3a0a4783768c9af6e275ac7573ac7ddf7e9ee8566e830d8fd7e512f,PodSandboxId:165028a9051cd2b786719b418a4b005bbdd2e13a735c5e98b3072dc90b72ff57,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1757332703392199783,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-149795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a999c546c3cf243b5bc764b1c7bcc19d,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8444,\"containerPort\":8444,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c01a55b26f98d659cb84f0e01d507b4bbbb7a4657effe5cfee821bff3e8fca7,PodSandboxId:3cf30400a1f0bfe236266236c4096dad440b5bddd406c6efaa1ecc781decf975,Metadata:&ContainerMetadata{Name:etcd,Attempt
:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1757332703383061482,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-149795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d198d64ccda796e844cc7692cb87e41,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34c17ee824d7f491d7c07c374ca2205434be4cf56242b857e2ad06e9f30a03ab,PodSandboxId:a243efcf1b53413fb9c3d
cce13b873c7ad6de31fa9ab9524f541e34a44d2f3ff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1757332703369716881,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-149795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 512eeffaafa40f337891a4fc086eef59,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},},}" file="otel-collector/interceptors.go:74" id=c8f16f3f-8408-4900-b686-808f5802d9e7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 12:07:44 default-k8s-diff-port-149795 crio[884]: time="2025-09-08 12:07:44.545715550Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3311ee0a-69c5-4621-9adb-d3d1f3cf2195 name=/runtime.v1.RuntimeService/Version
	Sep 08 12:07:44 default-k8s-diff-port-149795 crio[884]: time="2025-09-08 12:07:44.545862445Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3311ee0a-69c5-4621-9adb-d3d1f3cf2195 name=/runtime.v1.RuntimeService/Version
	Sep 08 12:07:44 default-k8s-diff-port-149795 crio[884]: time="2025-09-08 12:07:44.549604068Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c3bcf4ca-78b3-45a4-a950-d34101e099ca name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 12:07:44 default-k8s-diff-port-149795 crio[884]: time="2025-09-08 12:07:44.550172380Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757333264550150614,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176956,},InodesUsed:&UInt64Value{Value:65,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c3bcf4ca-78b3-45a4-a950-d34101e099ca name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 12:07:44 default-k8s-diff-port-149795 crio[884]: time="2025-09-08 12:07:44.550620678Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=63674c8f-ffd9-47cd-9ade-8870327bbb6a name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 12:07:44 default-k8s-diff-port-149795 crio[884]: time="2025-09-08 12:07:44.550669259Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=63674c8f-ffd9-47cd-9ade-8870327bbb6a name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 12:07:44 default-k8s-diff-port-149795 crio[884]: time="2025-09-08 12:07:44.551106919Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:329ed0687e7c9bcd686ced15a67d7e350b63a28bc0c207c9ad1905859fb615fc,PodSandboxId:d9971d1b61c2b240442e860d28c75ed1876d6b74546e9ae4d1caca122e147b43,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:6,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1757333092121720619,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-r9vzn,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: f6400ab9-1f7a-4025-bae5-eb4d4dc9dae7,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2295a57e5d0f147bbdd47cb07012fadbe3fa31f4466b20fc874a981f413654bd,PodSandboxId:e6e807891e561f2eaf19a281fc6b7d6c738e6fe93c47ab59c05c6740ba67abd6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1757332738346959970,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cb21d0b-e87b-4223-ab66-fb22e49c358a,},Annota
tions:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c944c5685dcbe3453a0762636a7e0bf9fb8fd84df73ff41e3f5354998844c36d,PodSandboxId:5d8bf6751a128d66acf89bc3aa31bac502c9ee3f9d5a79899995d52697862f0a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1757332717948289872,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f7309204-a2be-4cc0-a01b-de13b6afd01e,},Annotations:map[string]
string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6cc782e0ec2248d9b723af4f7a4aa589befe26da4d9ba49c275cdca6f74dec7,PodSandboxId:564fc335152637623fee614ca3c64e414252c6befca259213154629956993fd0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1757332711698979676,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-8bmsd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31101ce9-d6dc-4f5b-ad19-555dc9e29a68,},Annotations:map[string]string{io.kubernetes.container.
hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:049e2bd82da59a081bdd6cc45be2ff080f311ffc832f781396eb9328ed93c742,PodSandboxId:5e9bb07b59271317bdf542b1520014ed4419ff83229a0b31f45558efa466ad57,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b
97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1757332707597457007,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vmsg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91462068-fe67-4ff4-b9db-f7016960ab40,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1d8c38b6064ace141c9fc470297bdad1b46cbfec17b7ed88917f4ed73e3f238,PodSandboxId:e6e807891e561f2eaf19a281fc6b7d6c738e6fe93c47ab59c05c6740ba67abd6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_EXITED,CreatedAt:1757332707569581970,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cb21d0b-e87b-4223-ab66-fb22e49c358a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:132e0611e671809fe2004db5b204ebb98d88547afad9f17936178a3a61691d1e,PodSandboxId:005462b99d1e169d956b9dadfadd9eb59f72c050155b197c0cc1128de57e543c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTA
INER_RUNNING,CreatedAt:1757332703435396044,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-149795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f109fa0cc69fc770844283f79b5fed2c,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e77a34bd3a0a4783768c9af6e275ac7573ac7ddf7e9ee8566e830d8fd7e512f,PodSandboxId:165028a9051cd2b786719b418a4b005bbdd2e13a735c5e98b3072dc90b72ff57,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1757332703392199783,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-149795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a999c546c3cf243b5bc764b1c7bcc19d,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8444,\"containerPort\":8444,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c01a55b26f98d659cb84f0e01d507b4bbbb7a4657effe5cfee821bff3e8fca7,PodSandboxId:3cf30400a1f0bfe236266236c4096dad440b5bddd406c6efaa1ecc781decf975,Metadata:&ContainerMetadata{Name:etcd,Attempt
:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1757332703383061482,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-149795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d198d64ccda796e844cc7692cb87e41,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34c17ee824d7f491d7c07c374ca2205434be4cf56242b857e2ad06e9f30a03ab,PodSandboxId:a243efcf1b53413fb9c3d
cce13b873c7ad6de31fa9ab9524f541e34a44d2f3ff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1757332703369716881,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-149795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 512eeffaafa40f337891a4fc086eef59,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},},}" file="otel-collector/interceptors.go:74" id=63674c8f-ffd9-47cd-9ade-8870327bbb6a name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 12:07:44 default-k8s-diff-port-149795 crio[884]: time="2025-09-08 12:07:44.593708115Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=84952de1-a5b5-49a8-95e9-655c4f1e20a4 name=/runtime.v1.RuntimeService/Version
	Sep 08 12:07:44 default-k8s-diff-port-149795 crio[884]: time="2025-09-08 12:07:44.593803370Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=84952de1-a5b5-49a8-95e9-655c4f1e20a4 name=/runtime.v1.RuntimeService/Version
	Sep 08 12:07:44 default-k8s-diff-port-149795 crio[884]: time="2025-09-08 12:07:44.595135206Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=05fed189-960a-484c-86ac-3cd5d64921ba name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 12:07:44 default-k8s-diff-port-149795 crio[884]: time="2025-09-08 12:07:44.595884624Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757333264595800330,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176956,},InodesUsed:&UInt64Value{Value:65,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=05fed189-960a-484c-86ac-3cd5d64921ba name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 12:07:44 default-k8s-diff-port-149795 crio[884]: time="2025-09-08 12:07:44.597158493Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2818b015-c406-43ff-a889-7888cb3e92c5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 12:07:44 default-k8s-diff-port-149795 crio[884]: time="2025-09-08 12:07:44.597301173Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2818b015-c406-43ff-a889-7888cb3e92c5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 12:07:44 default-k8s-diff-port-149795 crio[884]: time="2025-09-08 12:07:44.597862931Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:329ed0687e7c9bcd686ced15a67d7e350b63a28bc0c207c9ad1905859fb615fc,PodSandboxId:d9971d1b61c2b240442e860d28c75ed1876d6b74546e9ae4d1caca122e147b43,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:6,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1757333092121720619,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-r9vzn,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: f6400ab9-1f7a-4025-bae5-eb4d4dc9dae7,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2295a57e5d0f147bbdd47cb07012fadbe3fa31f4466b20fc874a981f413654bd,PodSandboxId:e6e807891e561f2eaf19a281fc6b7d6c738e6fe93c47ab59c05c6740ba67abd6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1757332738346959970,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cb21d0b-e87b-4223-ab66-fb22e49c358a,},Annota
tions:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c944c5685dcbe3453a0762636a7e0bf9fb8fd84df73ff41e3f5354998844c36d,PodSandboxId:5d8bf6751a128d66acf89bc3aa31bac502c9ee3f9d5a79899995d52697862f0a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1757332717948289872,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f7309204-a2be-4cc0-a01b-de13b6afd01e,},Annotations:map[string]
string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6cc782e0ec2248d9b723af4f7a4aa589befe26da4d9ba49c275cdca6f74dec7,PodSandboxId:564fc335152637623fee614ca3c64e414252c6befca259213154629956993fd0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1757332711698979676,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-8bmsd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31101ce9-d6dc-4f5b-ad19-555dc9e29a68,},Annotations:map[string]string{io.kubernetes.container.
hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:049e2bd82da59a081bdd6cc45be2ff080f311ffc832f781396eb9328ed93c742,PodSandboxId:5e9bb07b59271317bdf542b1520014ed4419ff83229a0b31f45558efa466ad57,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b
97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1757332707597457007,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vmsg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91462068-fe67-4ff4-b9db-f7016960ab40,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1d8c38b6064ace141c9fc470297bdad1b46cbfec17b7ed88917f4ed73e3f238,PodSandboxId:e6e807891e561f2eaf19a281fc6b7d6c738e6fe93c47ab59c05c6740ba67abd6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_EXITED,CreatedAt:1757332707569581970,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cb21d0b-e87b-4223-ab66-fb22e49c358a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:132e0611e671809fe2004db5b204ebb98d88547afad9f17936178a3a61691d1e,PodSandboxId:005462b99d1e169d956b9dadfadd9eb59f72c050155b197c0cc1128de57e543c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTA
INER_RUNNING,CreatedAt:1757332703435396044,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-149795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f109fa0cc69fc770844283f79b5fed2c,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e77a34bd3a0a4783768c9af6e275ac7573ac7ddf7e9ee8566e830d8fd7e512f,PodSandboxId:165028a9051cd2b786719b418a4b005bbdd2e13a735c5e98b3072dc90b72ff57,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1757332703392199783,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-149795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a999c546c3cf243b5bc764b1c7bcc19d,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8444,\"containerPort\":8444,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c01a55b26f98d659cb84f0e01d507b4bbbb7a4657effe5cfee821bff3e8fca7,PodSandboxId:3cf30400a1f0bfe236266236c4096dad440b5bddd406c6efaa1ecc781decf975,Metadata:&ContainerMetadata{Name:etcd,Attempt
:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1757332703383061482,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-149795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d198d64ccda796e844cc7692cb87e41,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34c17ee824d7f491d7c07c374ca2205434be4cf56242b857e2ad06e9f30a03ab,PodSandboxId:a243efcf1b53413fb9c3d
cce13b873c7ad6de31fa9ab9524f541e34a44d2f3ff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1757332703369716881,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-149795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 512eeffaafa40f337891a4fc086eef59,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},},}" file="otel-collector/interceptors.go:74" id=2818b015-c406-43ff-a889-7888cb3e92c5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 12:07:44 default-k8s-diff-port-149795 crio[884]: time="2025-09-08 12:07:44.639062253Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cb5c9d3c-c82b-4234-aea2-ebedfcecd2e4 name=/runtime.v1.RuntimeService/Version
	Sep 08 12:07:44 default-k8s-diff-port-149795 crio[884]: time="2025-09-08 12:07:44.639330121Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cb5c9d3c-c82b-4234-aea2-ebedfcecd2e4 name=/runtime.v1.RuntimeService/Version
	Sep 08 12:07:44 default-k8s-diff-port-149795 crio[884]: time="2025-09-08 12:07:44.640547505Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=862f2db0-5e8a-4c1b-b57e-65f389bc4149 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 12:07:44 default-k8s-diff-port-149795 crio[884]: time="2025-09-08 12:07:44.641041333Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757333264641018746,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176956,},InodesUsed:&UInt64Value{Value:65,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=862f2db0-5e8a-4c1b-b57e-65f389bc4149 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 12:07:44 default-k8s-diff-port-149795 crio[884]: time="2025-09-08 12:07:44.641655978Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5f436347-bfbb-4bdc-8fc7-c3cf1f43609c name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 12:07:44 default-k8s-diff-port-149795 crio[884]: time="2025-09-08 12:07:44.641950029Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5f436347-bfbb-4bdc-8fc7-c3cf1f43609c name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 12:07:44 default-k8s-diff-port-149795 crio[884]: time="2025-09-08 12:07:44.642521641Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:329ed0687e7c9bcd686ced15a67d7e350b63a28bc0c207c9ad1905859fb615fc,PodSandboxId:d9971d1b61c2b240442e860d28c75ed1876d6b74546e9ae4d1caca122e147b43,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:6,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1757333092121720619,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-r9vzn,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: f6400ab9-1f7a-4025-bae5-eb4d4dc9dae7,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2295a57e5d0f147bbdd47cb07012fadbe3fa31f4466b20fc874a981f413654bd,PodSandboxId:e6e807891e561f2eaf19a281fc6b7d6c738e6fe93c47ab59c05c6740ba67abd6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1757332738346959970,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cb21d0b-e87b-4223-ab66-fb22e49c358a,},Annota
tions:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c944c5685dcbe3453a0762636a7e0bf9fb8fd84df73ff41e3f5354998844c36d,PodSandboxId:5d8bf6751a128d66acf89bc3aa31bac502c9ee3f9d5a79899995d52697862f0a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1757332717948289872,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f7309204-a2be-4cc0-a01b-de13b6afd01e,},Annotations:map[string]
string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6cc782e0ec2248d9b723af4f7a4aa589befe26da4d9ba49c275cdca6f74dec7,PodSandboxId:564fc335152637623fee614ca3c64e414252c6befca259213154629956993fd0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1757332711698979676,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-8bmsd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31101ce9-d6dc-4f5b-ad19-555dc9e29a68,},Annotations:map[string]string{io.kubernetes.container.
hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:049e2bd82da59a081bdd6cc45be2ff080f311ffc832f781396eb9328ed93c742,PodSandboxId:5e9bb07b59271317bdf542b1520014ed4419ff83229a0b31f45558efa466ad57,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b
97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1757332707597457007,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vmsg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91462068-fe67-4ff4-b9db-f7016960ab40,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1d8c38b6064ace141c9fc470297bdad1b46cbfec17b7ed88917f4ed73e3f238,PodSandboxId:e6e807891e561f2eaf19a281fc6b7d6c738e6fe93c47ab59c05c6740ba67abd6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_EXITED,CreatedAt:1757332707569581970,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cb21d0b-e87b-4223-ab66-fb22e49c358a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:132e0611e671809fe2004db5b204ebb98d88547afad9f17936178a3a61691d1e,PodSandboxId:005462b99d1e169d956b9dadfadd9eb59f72c050155b197c0cc1128de57e543c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTA
INER_RUNNING,CreatedAt:1757332703435396044,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-149795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f109fa0cc69fc770844283f79b5fed2c,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e77a34bd3a0a4783768c9af6e275ac7573ac7ddf7e9ee8566e830d8fd7e512f,PodSandboxId:165028a9051cd2b786719b418a4b005bbdd2e13a735c5e98b3072dc90b72ff57,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1757332703392199783,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-149795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a999c546c3cf243b5bc764b1c7bcc19d,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8444,\"containerPort\":8444,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c01a55b26f98d659cb84f0e01d507b4bbbb7a4657effe5cfee821bff3e8fca7,PodSandboxId:3cf30400a1f0bfe236266236c4096dad440b5bddd406c6efaa1ecc781decf975,Metadata:&ContainerMetadata{Name:etcd,Attempt
:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1757332703383061482,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-149795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d198d64ccda796e844cc7692cb87e41,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34c17ee824d7f491d7c07c374ca2205434be4cf56242b857e2ad06e9f30a03ab,PodSandboxId:a243efcf1b53413fb9c3d
cce13b873c7ad6de31fa9ab9524f541e34a44d2f3ff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1757332703369716881,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-149795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 512eeffaafa40f337891a4fc086eef59,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},},}" file="otel-collector/interceptors.go:74" id=5f436347-bfbb-4bdc-8fc7-c3cf1f43609c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	329ed0687e7c9       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                      2 minutes ago       Exited              dashboard-metrics-scraper   6                   d9971d1b61c2b       dashboard-metrics-scraper-6ffb444bf9-r9vzn
	2295a57e5d0f1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago       Running             storage-provisioner         2                   e6e807891e561       storage-provisioner
	c944c5685dcbe       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   9 minutes ago       Running             busybox                     1                   5d8bf6751a128       busybox
	f6cc782e0ec22       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      9 minutes ago       Running             coredns                     1                   564fc33515263       coredns-66bc5c9577-8bmsd
	049e2bd82da59       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      9 minutes ago       Running             kube-proxy                  1                   5e9bb07b59271       kube-proxy-vmsg4
	c1d8c38b6064a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner         1                   e6e807891e561       storage-provisioner
	132e0611e6718       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      9 minutes ago       Running             kube-controller-manager     1                   005462b99d1e1       kube-controller-manager-default-k8s-diff-port-149795
	5e77a34bd3a0a       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                      9 minutes ago       Running             kube-apiserver              1                   165028a9051cd       kube-apiserver-default-k8s-diff-port-149795
	3c01a55b26f98       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      9 minutes ago       Running             etcd                        1                   3cf30400a1f0b       etcd-default-k8s-diff-port-149795
	34c17ee824d7f       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      9 minutes ago       Running             kube-scheduler              1                   a243efcf1b534       kube-scheduler-default-k8s-diff-port-149795
	
	
	==> coredns [f6cc782e0ec2248d9b723af4f7a4aa589befe26da4d9ba49c275cdca6f74dec7] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57732 - 1757 "HINFO IN 5940651371093740128.8074940744283301137. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012353843s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-149795
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-149795
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9b5c9e357ec605e3f7a3fbfd5f3e59fa37db6ba2
	                    minikube.k8s.io/name=default-k8s-diff-port-149795
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T11_55_16_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 11:55:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-149795
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 12:07:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 12:03:44 +0000   Mon, 08 Sep 2025 11:55:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 12:03:44 +0000   Mon, 08 Sep 2025 11:55:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 12:03:44 +0000   Mon, 08 Sep 2025 11:55:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 12:03:44 +0000   Mon, 08 Sep 2025 11:58:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.109
	  Hostname:    default-k8s-diff-port-149795
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 dbabe12d88764d91a3177cf0fdd6c78d
	  System UUID:                dbabe12d-8876-4d91-a317-7cf0fdd6c78d
	  Boot ID:                    c7544f21-1a6f-4746-bab2-28225f8275e1
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-66bc5c9577-8bmsd                                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     12m
	  kube-system                 etcd-default-k8s-diff-port-149795                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         12m
	  kube-system                 kube-apiserver-default-k8s-diff-port-149795             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-149795    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-vmsg4                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-default-k8s-diff-port-149795             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-746fcd58dc-6hdsd                         100m (5%)     0 (0%)      200Mi (6%)       0 (0%)         11m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-r9vzn              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-h5hcp                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (12%)  170Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   Starting                 9m16s                  kube-proxy       
	  Normal   Starting                 12m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m                    kubelet          Node default-k8s-diff-port-149795 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                    kubelet          Node default-k8s-diff-port-149795 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                    kubelet          Node default-k8s-diff-port-149795 status is now: NodeHasSufficientPID
	  Normal   NodeReady                12m                    kubelet          Node default-k8s-diff-port-149795 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node default-k8s-diff-port-149795 event: Registered Node default-k8s-diff-port-149795 in Controller
	  Normal   Starting                 9m22s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  9m22s (x8 over 9m22s)  kubelet          Node default-k8s-diff-port-149795 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m22s (x8 over 9m22s)  kubelet          Node default-k8s-diff-port-149795 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m22s (x7 over 9m22s)  kubelet          Node default-k8s-diff-port-149795 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  9m22s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 9m18s                  kubelet          Node default-k8s-diff-port-149795 has been rebooted, boot id: c7544f21-1a6f-4746-bab2-28225f8275e1
	  Normal   RegisteredNode           9m14s                  node-controller  Node default-k8s-diff-port-149795 event: Registered Node default-k8s-diff-port-149795 in Controller
	
	
	==> dmesg <==
	[Sep 8 11:57] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000008] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001847] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Sep 8 11:58] (rpcbind)[121]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.715268] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.085475] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.099527] kauditd_printk_skb: 74 callbacks suppressed
	[  +5.532355] kauditd_printk_skb: 168 callbacks suppressed
	[  +0.731566] kauditd_printk_skb: 335 callbacks suppressed
	[ +20.399004] kauditd_printk_skb: 11 callbacks suppressed
	[Sep 8 11:59] kauditd_printk_skb: 5 callbacks suppressed
	[ +11.063896] kauditd_printk_skb: 55 callbacks suppressed
	[ +20.688334] kauditd_printk_skb: 6 callbacks suppressed
	[Sep 8 12:00] kauditd_printk_skb: 6 callbacks suppressed
	[Sep 8 12:02] kauditd_printk_skb: 6 callbacks suppressed
	[Sep 8 12:04] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [3c01a55b26f98d659cb84f0e01d507b4bbbb7a4657effe5cfee821bff3e8fca7] <==
	{"level":"warn","ts":"2025-09-08T11:58:25.419412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:58:25.432250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:58:25.443800Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:58:25.455246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:58:25.469159Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:58:25.481731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:58:25.494997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:58:25.506011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:58:25.521218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:58:25.528564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:58:25.538705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:58:25.554909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:58:25.560677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:58:25.569574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:58:25.589913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:58:25.607163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:58:25.608632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:58:25.624423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:58:25.626865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:58:25.636175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:58:25.649758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:58:25.664133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:58:25.671041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:58:25.723080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60666","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-08T11:59:48.145933Z","caller":"traceutil/trace.go:172","msg":"trace[1094567219] transaction","detail":"{read_only:false; response_revision:749; number_of_response:1; }","duration":"110.759246ms","start":"2025-09-08T11:59:48.035152Z","end":"2025-09-08T11:59:48.145911Z","steps":["trace[1094567219] 'process raft request'  (duration: 110.536973ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:07:44 up 9 min,  0 users,  load average: 0.38, 0.51, 0.26
	Linux default-k8s-diff-port-149795 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep  4 13:14:36 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [5e77a34bd3a0a4783768c9af6e275ac7573ac7ddf7e9ee8566e830d8fd7e512f] <==
	I0908 12:03:42.653580       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:04:13.807887       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0908 12:04:27.340547       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 12:04:27.340637       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0908 12:04:27.340648       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0908 12:04:27.340795       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 12:04:27.340869       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0908 12:04:27.342669       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0908 12:04:55.504871       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:05:16.664026       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:05:59.308508       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0908 12:06:27.341574       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 12:06:27.341667       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0908 12:06:27.341677       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0908 12:06:27.342892       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 12:06:27.342916       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0908 12:06:27.342926       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0908 12:06:40.559586       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:07:09.721579       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [132e0611e671809fe2004db5b204ebb98d88547afad9f17936178a3a61691d1e] <==
	I0908 12:01:30.979442       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:02:00.915123       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:02:00.988969       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:02:30.920497       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:02:30.996686       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:03:00.925704       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:03:01.006373       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:03:30.930567       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:03:31.014887       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:04:00.935495       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:04:01.022391       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:04:30.940301       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:04:31.030517       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:05:00.944882       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:05:01.039583       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:05:30.950404       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:05:31.047497       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:06:00.955371       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:06:01.054731       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:06:30.961214       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:06:31.061975       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:07:00.967650       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:07:01.068566       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:07:30.973329       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:07:31.077035       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [049e2bd82da59a081bdd6cc45be2ff080f311ffc832f781396eb9328ed93c742] <==
	I0908 11:58:27.777159       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 11:58:27.877734       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 11:58:27.877812       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.109"]
	E0908 11:58:27.877951       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 11:58:27.913409       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0908 11:58:27.913526       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0908 11:58:27.913634       1 server_linux.go:132] "Using iptables Proxier"
	I0908 11:58:27.923051       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 11:58:27.923362       1 server.go:527] "Version info" version="v1.34.0"
	I0908 11:58:27.923405       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 11:58:27.931990       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 11:58:27.932029       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 11:58:27.932132       1 config.go:200] "Starting service config controller"
	I0908 11:58:27.932158       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 11:58:27.932170       1 config.go:106] "Starting endpoint slice config controller"
	I0908 11:58:27.932174       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 11:58:27.933575       1 config.go:309] "Starting node config controller"
	I0908 11:58:27.933903       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 11:58:27.933943       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 11:58:28.032896       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0908 11:58:28.032988       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0908 11:58:28.032999       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [34c17ee824d7f491d7c07c374ca2205434be4cf56242b857e2ad06e9f30a03ab] <==
	I0908 11:58:24.454322       1 serving.go:386] Generated self-signed cert in-memory
	W0908 11:58:26.307209       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0908 11:58:26.307284       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0908 11:58:26.308880       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0908 11:58:26.308933       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0908 11:58:26.376503       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0908 11:58:26.376573       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 11:58:26.382583       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 11:58:26.382692       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 11:58:26.384522       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0908 11:58:26.384600       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0908 11:58:26.482989       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 08 12:07:02 default-k8s-diff-port-149795 kubelet[1202]: E0908 12:07:02.224436    1202 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757333222223806642  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 08 12:07:02 default-k8s-diff-port-149795 kubelet[1202]: E0908 12:07:02.224481    1202 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757333222223806642  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 08 12:07:03 default-k8s-diff-port-149795 kubelet[1202]: I0908 12:07:03.106365    1202 scope.go:117] "RemoveContainer" containerID="329ed0687e7c9bcd686ced15a67d7e350b63a28bc0c207c9ad1905859fb615fc"
	Sep 08 12:07:03 default-k8s-diff-port-149795 kubelet[1202]: E0908 12:07:03.106576    1202 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-r9vzn_kubernetes-dashboard(f6400ab9-1f7a-4025-bae5-eb4d4dc9dae7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r9vzn" podUID="f6400ab9-1f7a-4025-bae5-eb4d4dc9dae7"
	Sep 08 12:07:05 default-k8s-diff-port-149795 kubelet[1202]: E0908 12:07:05.107777    1202 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-h5hcp" podUID="d20477db-7399-4b1f-ad64-6cfa0fb34d60"
	Sep 08 12:07:07 default-k8s-diff-port-149795 kubelet[1202]: E0908 12:07:07.107705    1202 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-6hdsd" podUID="c9e0e26f-f05a-4d6d-979b-711c4381d179"
	Sep 08 12:07:12 default-k8s-diff-port-149795 kubelet[1202]: E0908 12:07:12.225900    1202 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757333232225582852  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 08 12:07:12 default-k8s-diff-port-149795 kubelet[1202]: E0908 12:07:12.225925    1202 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757333232225582852  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 08 12:07:14 default-k8s-diff-port-149795 kubelet[1202]: I0908 12:07:14.106729    1202 scope.go:117] "RemoveContainer" containerID="329ed0687e7c9bcd686ced15a67d7e350b63a28bc0c207c9ad1905859fb615fc"
	Sep 08 12:07:14 default-k8s-diff-port-149795 kubelet[1202]: E0908 12:07:14.106940    1202 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-r9vzn_kubernetes-dashboard(f6400ab9-1f7a-4025-bae5-eb4d4dc9dae7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r9vzn" podUID="f6400ab9-1f7a-4025-bae5-eb4d4dc9dae7"
	Sep 08 12:07:16 default-k8s-diff-port-149795 kubelet[1202]: E0908 12:07:16.112547    1202 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-h5hcp" podUID="d20477db-7399-4b1f-ad64-6cfa0fb34d60"
	Sep 08 12:07:20 default-k8s-diff-port-149795 kubelet[1202]: E0908 12:07:20.109711    1202 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-6hdsd" podUID="c9e0e26f-f05a-4d6d-979b-711c4381d179"
	Sep 08 12:07:22 default-k8s-diff-port-149795 kubelet[1202]: E0908 12:07:22.227153    1202 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757333242226919517  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 08 12:07:22 default-k8s-diff-port-149795 kubelet[1202]: E0908 12:07:22.227197    1202 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757333242226919517  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 08 12:07:28 default-k8s-diff-port-149795 kubelet[1202]: I0908 12:07:28.105977    1202 scope.go:117] "RemoveContainer" containerID="329ed0687e7c9bcd686ced15a67d7e350b63a28bc0c207c9ad1905859fb615fc"
	Sep 08 12:07:28 default-k8s-diff-port-149795 kubelet[1202]: E0908 12:07:28.106130    1202 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-r9vzn_kubernetes-dashboard(f6400ab9-1f7a-4025-bae5-eb4d4dc9dae7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r9vzn" podUID="f6400ab9-1f7a-4025-bae5-eb4d4dc9dae7"
	Sep 08 12:07:28 default-k8s-diff-port-149795 kubelet[1202]: E0908 12:07:28.109270    1202 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-h5hcp" podUID="d20477db-7399-4b1f-ad64-6cfa0fb34d60"
	Sep 08 12:07:32 default-k8s-diff-port-149795 kubelet[1202]: E0908 12:07:32.228559    1202 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757333252228258600  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 08 12:07:32 default-k8s-diff-port-149795 kubelet[1202]: E0908 12:07:32.228601    1202 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757333252228258600  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 08 12:07:34 default-k8s-diff-port-149795 kubelet[1202]: E0908 12:07:34.109506    1202 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-6hdsd" podUID="c9e0e26f-f05a-4d6d-979b-711c4381d179"
	Sep 08 12:07:39 default-k8s-diff-port-149795 kubelet[1202]: I0908 12:07:39.105705    1202 scope.go:117] "RemoveContainer" containerID="329ed0687e7c9bcd686ced15a67d7e350b63a28bc0c207c9ad1905859fb615fc"
	Sep 08 12:07:39 default-k8s-diff-port-149795 kubelet[1202]: E0908 12:07:39.105901    1202 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-r9vzn_kubernetes-dashboard(f6400ab9-1f7a-4025-bae5-eb4d4dc9dae7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r9vzn" podUID="f6400ab9-1f7a-4025-bae5-eb4d4dc9dae7"
	Sep 08 12:07:41 default-k8s-diff-port-149795 kubelet[1202]: E0908 12:07:41.107977    1202 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-h5hcp" podUID="d20477db-7399-4b1f-ad64-6cfa0fb34d60"
	Sep 08 12:07:42 default-k8s-diff-port-149795 kubelet[1202]: E0908 12:07:42.230287    1202 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757333262230014712  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 08 12:07:42 default-k8s-diff-port-149795 kubelet[1202]: E0908 12:07:42.230328    1202 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757333262230014712  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	
	
	==> storage-provisioner [2295a57e5d0f147bbdd47cb07012fadbe3fa31f4466b20fc874a981f413654bd] <==
	W0908 12:07:20.480480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:07:22.484466       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:07:22.490758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:07:24.494781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:07:24.503667       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:07:26.507209       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:07:26.512515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:07:28.516255       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:07:28.522002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:07:30.525928       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:07:30.531139       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:07:32.535756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:07:32.545094       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:07:34.548262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:07:34.552994       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:07:36.556981       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:07:36.565124       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:07:38.568726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:07:38.573253       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:07:40.577016       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:07:40.586922       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:07:42.590460       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:07:42.595455       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:07:44.600614       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:07:44.606955       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c1d8c38b6064ace141c9fc470297bdad1b46cbfec17b7ed88917f4ed73e3f238] <==
	I0908 11:58:27.676366       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0908 11:58:57.679271       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-149795 -n default-k8s-diff-port-149795
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-149795 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-6hdsd kubernetes-dashboard-855c9754f9-h5hcp
helpers_test.go:282: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-149795 describe pod metrics-server-746fcd58dc-6hdsd kubernetes-dashboard-855c9754f9-h5hcp
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-149795 describe pod metrics-server-746fcd58dc-6hdsd kubernetes-dashboard-855c9754f9-h5hcp: exit status 1 (58.4839ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-6hdsd" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-h5hcp" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context default-k8s-diff-port-149795 describe pod metrics-server-746fcd58dc-6hdsd kubernetes-dashboard-855c9754f9-h5hcp: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.52s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.53s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-h5hcp" [d20477db-7399-4b1f-ad64-6cfa0fb34d60] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0908 12:08:11.608083  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/bridge-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:08:20.415674  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/old-k8s-version-073517/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:08:30.882386  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:08:52.932032  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/calico-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:08:53.526815  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/auto-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:08:56.712069  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/functional-461050/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:08:59.432441  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/kindnet-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:09:42.603989  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/no-preload-474007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:10:16.593825  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/auto-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:10:22.497381  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/kindnet-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:11:04.562613  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/custom-flannel-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:11:39.509686  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/enable-default-cni-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:12:25.365223  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/flannel-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:12:27.627912  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/custom-flannel-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:13:02.571749  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/enable-default-cni-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:13:11.608060  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/bridge-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:13:20.415582  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/old-k8s-version-073517/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:13:30.882452  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:13:48.430487  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/flannel-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:13:52.931499  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/calico-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:13:53.527579  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/auto-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:13:56.711904  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/functional-461050/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:13:59.432234  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/kindnet-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:14:34.672321  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/bridge-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:14:42.603489  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/no-preload-474007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:14:43.478148  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/old-k8s-version-073517/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:15:15.995817  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/calico-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:16:04.563273  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/custom-flannel-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:16:05.669610  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/no-preload-474007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:16:39.509756  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/enable-default-cni-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:285: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-149795 -n default-k8s-diff-port-149795
start_stop_delete_test.go:285: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-09-08 12:16:46.079108601 +0000 UTC m=+6458.041511775
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-149795 describe po kubernetes-dashboard-855c9754f9-h5hcp -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) kubectl --context default-k8s-diff-port-149795 describe po kubernetes-dashboard-855c9754f9-h5hcp -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-h5hcp
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             default-k8s-diff-port-149795/192.168.39.109
Start Time:       Mon, 08 Sep 2025 11:58:31 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-msxnq (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-msxnq:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  18m                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-h5hcp to default-k8s-diff-port-149795
Warning  Failed     17m                   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": copying system image from manifest list: determining manifest MIME type for docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     15m                   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    12m (x5 over 18m)     kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     11m (x5 over 17m)     kubelet            Error: ErrImagePull
Warning  Failed     11m (x3 over 16m)     kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": fetching target platform image selected from manifest list: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     7m24s (x26 over 17m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    3m13s (x45 over 17m)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     2m25s                 kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": initializing source docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-149795 logs kubernetes-dashboard-855c9754f9-h5hcp -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-149795 logs kubernetes-dashboard-855c9754f9-h5hcp -n kubernetes-dashboard: exit status 1 (67.867423ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-h5hcp" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:285: kubectl --context default-k8s-diff-port-149795 logs kubernetes-dashboard-855c9754f9-h5hcp -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-149795 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-149795 -n default-k8s-diff-port-149795
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-149795 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-149795 logs -n 25: (1.260458801s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────
─────┐
	│ COMMAND │                                                                                                                    ARGS                                                                                                                     │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────
─────┤
	│ start   │ -p newest-cni-549052 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0 │ newest-cni-549052            │ jenkins │ v1.36.0 │ 08 Sep 25 11:56 UTC │ 08 Sep 25 11:57 UTC │
	│ addons  │ enable dashboard -p no-preload-474007 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                │ no-preload-474007            │ jenkins │ v1.36.0 │ 08 Sep 25 11:56 UTC │ 08 Sep 25 11:56 UTC │
	│ start   │ -p no-preload-474007 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                       │ no-preload-474007            │ jenkins │ v1.36.0 │ 08 Sep 25 11:56 UTC │ 08 Sep 25 11:57 UTC │
	│ addons  │ enable dashboard -p embed-certs-256792 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                               │ embed-certs-256792           │ jenkins │ v1.36.0 │ 08 Sep 25 11:56 UTC │ 08 Sep 25 11:56 UTC │
	│ start   │ -p embed-certs-256792 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                        │ embed-certs-256792           │ jenkins │ v1.36.0 │ 08 Sep 25 11:56 UTC │ 08 Sep 25 11:57 UTC │
	│ addons  │ enable metrics-server -p newest-cni-549052 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                     │ newest-cni-549052            │ jenkins │ v1.36.0 │ 08 Sep 25 11:57 UTC │ 08 Sep 25 11:57 UTC │
	│ stop    │ -p newest-cni-549052 --alsologtostderr -v=3                                                                                                                                                                                                 │ newest-cni-549052            │ jenkins │ v1.36.0 │ 08 Sep 25 11:57 UTC │ 08 Sep 25 11:57 UTC │
	│ addons  │ enable dashboard -p newest-cni-549052 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                │ newest-cni-549052            │ jenkins │ v1.36.0 │ 08 Sep 25 11:57 UTC │ 08 Sep 25 11:57 UTC │
	│ start   │ -p newest-cni-549052 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0 │ newest-cni-549052            │ jenkins │ v1.36.0 │ 08 Sep 25 11:57 UTC │ 08 Sep 25 11:58 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-149795 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                     │ default-k8s-diff-port-149795 │ jenkins │ v1.36.0 │ 08 Sep 25 11:57 UTC │ 08 Sep 25 11:57 UTC │
	│ start   │ -p default-k8s-diff-port-149795 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-149795 │ jenkins │ v1.36.0 │ 08 Sep 25 11:57 UTC │ 08 Sep 25 11:58 UTC │
	│ image   │ embed-certs-256792 image list --format=json                                                                                                                                                                                                 │ embed-certs-256792           │ jenkins │ v1.36.0 │ 08 Sep 25 11:58 UTC │ 08 Sep 25 11:58 UTC │
	│ pause   │ -p embed-certs-256792 --alsologtostderr -v=1                                                                                                                                                                                                │ embed-certs-256792           │ jenkins │ v1.36.0 │ 08 Sep 25 11:58 UTC │ 08 Sep 25 11:58 UTC │
	│ unpause │ -p embed-certs-256792 --alsologtostderr -v=1                                                                                                                                                                                                │ embed-certs-256792           │ jenkins │ v1.36.0 │ 08 Sep 25 11:58 UTC │ 08 Sep 25 11:58 UTC │
	│ image   │ newest-cni-549052 image list --format=json                                                                                                                                                                                                  │ newest-cni-549052            │ jenkins │ v1.36.0 │ 08 Sep 25 11:58 UTC │ 08 Sep 25 11:58 UTC │
	│ pause   │ -p newest-cni-549052 --alsologtostderr -v=1                                                                                                                                                                                                 │ newest-cni-549052            │ jenkins │ v1.36.0 │ 08 Sep 25 11:58 UTC │ 08 Sep 25 11:58 UTC │
	│ delete  │ -p embed-certs-256792                                                                                                                                                                                                                       │ embed-certs-256792           │ jenkins │ v1.36.0 │ 08 Sep 25 11:58 UTC │ 08 Sep 25 11:58 UTC │
	│ delete  │ -p embed-certs-256792                                                                                                                                                                                                                       │ embed-certs-256792           │ jenkins │ v1.36.0 │ 08 Sep 25 11:58 UTC │ 08 Sep 25 11:58 UTC │
	│ image   │ no-preload-474007 image list --format=json                                                                                                                                                                                                  │ no-preload-474007            │ jenkins │ v1.36.0 │ 08 Sep 25 11:58 UTC │ 08 Sep 25 11:58 UTC │
	│ pause   │ -p no-preload-474007 --alsologtostderr -v=1                                                                                                                                                                                                 │ no-preload-474007            │ jenkins │ v1.36.0 │ 08 Sep 25 11:58 UTC │ 08 Sep 25 11:58 UTC │
	│ delete  │ -p newest-cni-549052                                                                                                                                                                                                                        │ newest-cni-549052            │ jenkins │ v1.36.0 │ 08 Sep 25 11:58 UTC │ 08 Sep 25 11:58 UTC │
	│ unpause │ -p no-preload-474007 --alsologtostderr -v=1                                                                                                                                                                                                 │ no-preload-474007            │ jenkins │ v1.36.0 │ 08 Sep 25 11:58 UTC │ 08 Sep 25 11:58 UTC │
	│ delete  │ -p newest-cni-549052                                                                                                                                                                                                                        │ newest-cni-549052            │ jenkins │ v1.36.0 │ 08 Sep 25 11:58 UTC │ 08 Sep 25 11:58 UTC │
	│ delete  │ -p no-preload-474007                                                                                                                                                                                                                        │ no-preload-474007            │ jenkins │ v1.36.0 │ 08 Sep 25 11:58 UTC │ 08 Sep 25 11:58 UTC │
	│ delete  │ -p no-preload-474007                                                                                                                                                                                                                        │ no-preload-474007            │ jenkins │ v1.36.0 │ 08 Sep 25 11:58 UTC │ 08 Sep 25 11:58 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────
─────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 11:57:42
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 11:57:42.898398  812547 out.go:360] Setting OutFile to fd 1 ...
	I0908 11:57:42.898550  812547 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:57:42.898562  812547 out.go:374] Setting ErrFile to fd 2...
	I0908 11:57:42.898566  812547 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:57:42.898823  812547 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21503-748170/.minikube/bin
	I0908 11:57:42.899482  812547 out.go:368] Setting JSON to false
	I0908 11:57:42.900654  812547 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":74379,"bootTime":1757258284,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 11:57:42.900724  812547 start.go:140] virtualization: kvm guest
	I0908 11:57:42.902501  812547 out.go:179] * [default-k8s-diff-port-149795] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 11:57:42.903989  812547 notify.go:220] Checking for updates...
	I0908 11:57:42.903996  812547 out.go:179]   - MINIKUBE_LOCATION=21503
	I0908 11:57:42.906751  812547 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 11:57:42.908054  812547 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21503-748170/kubeconfig
	I0908 11:57:42.909127  812547 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21503-748170/.minikube
	I0908 11:57:42.910157  812547 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 11:57:42.911116  812547 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 11:57:42.912696  812547 config.go:182] Loaded profile config "default-k8s-diff-port-149795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 11:57:42.913410  812547 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:57:42.913483  812547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:57:42.929818  812547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41323
	I0908 11:57:42.930451  812547 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:57:42.931128  812547 main.go:141] libmachine: Using API Version  1
	I0908 11:57:42.931169  812547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:57:42.931600  812547 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:57:42.931872  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .DriverName
	I0908 11:57:42.932131  812547 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 11:57:42.932474  812547 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:57:42.932533  812547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:57:42.948994  812547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41571
	I0908 11:57:42.949488  812547 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:57:42.950108  812547 main.go:141] libmachine: Using API Version  1
	I0908 11:57:42.950138  812547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:57:42.950472  812547 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:57:42.950690  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .DriverName
	I0908 11:57:42.990429  812547 out.go:179] * Using the kvm2 driver based on existing profile
	I0908 11:57:42.991742  812547 start.go:304] selected driver: kvm2
	I0908 11:57:42.991765  812547 start.go:918] validating driver "kvm2" against &{Name:default-k8s-diff-port-149795 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-149795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 11:57:42.991903  812547 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 11:57:42.992936  812547 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 11:57:42.993033  812547 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21503-748170/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0908 11:57:43.010450  812547 install.go:137] /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0908 11:57:43.010937  812547 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 11:57:43.010979  812547 cni.go:84] Creating CNI manager for ""
	I0908 11:57:43.011021  812547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0908 11:57:43.011075  812547 start.go:348] cluster config:
	{Name:default-k8s-diff-port-149795 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-149795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 11:57:43.011196  812547 iso.go:125] acquiring lock: {Name:mk013a3bcd14eba8870ec8e08630600588ab11c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 11:57:43.012784  812547 out.go:179] * Starting "default-k8s-diff-port-149795" primary control-plane node in "default-k8s-diff-port-149795" cluster
	I0908 11:57:40.491914  811458 node_ready.go:49] node "no-preload-474007" is "Ready"
	I0908 11:57:40.491945  811458 node_ready.go:38] duration metric: took 6.509479549s for node "no-preload-474007" to be "Ready" ...
	I0908 11:57:40.491961  811458 api_server.go:52] waiting for apiserver process to appear ...
	I0908 11:57:40.492011  811458 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 11:57:40.518972  811458 api_server.go:72] duration metric: took 6.856993983s to wait for apiserver process to appear ...
	I0908 11:57:40.519007  811458 api_server.go:88] waiting for apiserver healthz status ...
	I0908 11:57:40.519036  811458 api_server.go:253] Checking apiserver healthz at https://192.168.61.59:8443/healthz ...
	I0908 11:57:40.526000  811458 api_server.go:279] https://192.168.61.59:8443/healthz returned 200:
	ok
	I0908 11:57:40.527220  811458 api_server.go:141] control plane version: v1.34.0
	I0908 11:57:40.527247  811458 api_server.go:131] duration metric: took 8.230769ms to wait for apiserver health ...
	I0908 11:57:40.527258  811458 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 11:57:40.532690  811458 system_pods.go:59] 8 kube-system pods found
	I0908 11:57:40.532722  811458 system_pods.go:61] "coredns-66bc5c9577-nvjls" [1b079ef7-d1a6-4e01-a88c-b5c7fa725797] Running
	I0908 11:57:40.532734  811458 system_pods.go:61] "etcd-no-preload-474007" [8fd2fdfc-a6e2-4ec0-a61d-04bd593db882] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 11:57:40.532738  811458 system_pods.go:61] "kube-apiserver-no-preload-474007" [948963be-9734-4dec-b2aa-e97e0f7722e3] Running
	I0908 11:57:40.532744  811458 system_pods.go:61] "kube-controller-manager-no-preload-474007" [4a53f493-ad8e-42b8-bbb3-ce0b26bd5985] Running
	I0908 11:57:40.532748  811458 system_pods.go:61] "kube-proxy-9fljr" [63bf4b52-6670-4c76-af05-863f9e5f233e] Running
	I0908 11:57:40.532751  811458 system_pods.go:61] "kube-scheduler-no-preload-474007" [9c847320-1276-44a9-a435-f0b4e0939801] Running
	I0908 11:57:40.532757  811458 system_pods.go:61] "metrics-server-746fcd58dc-bbz2v" [a9b335ae-0a9f-4124-9a90-bf148a7580ee] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 11:57:40.532760  811458 system_pods.go:61] "storage-provisioner" [5ef0a874-a428-461f-8a06-9729c469a4b4] Running
	I0908 11:57:40.532767  811458 system_pods.go:74] duration metric: took 5.502732ms to wait for pod list to return data ...
	I0908 11:57:40.532778  811458 default_sa.go:34] waiting for default service account to be created ...
	I0908 11:57:40.536081  811458 default_sa.go:45] found service account: "default"
	I0908 11:57:40.536103  811458 default_sa.go:55] duration metric: took 3.319566ms for default service account to be created ...
	I0908 11:57:40.536111  811458 system_pods.go:116] waiting for k8s-apps to be running ...
	I0908 11:57:40.538802  811458 system_pods.go:86] 8 kube-system pods found
	I0908 11:57:40.538827  811458 system_pods.go:89] "coredns-66bc5c9577-nvjls" [1b079ef7-d1a6-4e01-a88c-b5c7fa725797] Running
	I0908 11:57:40.538840  811458 system_pods.go:89] "etcd-no-preload-474007" [8fd2fdfc-a6e2-4ec0-a61d-04bd593db882] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 11:57:40.538848  811458 system_pods.go:89] "kube-apiserver-no-preload-474007" [948963be-9734-4dec-b2aa-e97e0f7722e3] Running
	I0908 11:57:40.538859  811458 system_pods.go:89] "kube-controller-manager-no-preload-474007" [4a53f493-ad8e-42b8-bbb3-ce0b26bd5985] Running
	I0908 11:57:40.538864  811458 system_pods.go:89] "kube-proxy-9fljr" [63bf4b52-6670-4c76-af05-863f9e5f233e] Running
	I0908 11:57:40.538869  811458 system_pods.go:89] "kube-scheduler-no-preload-474007" [9c847320-1276-44a9-a435-f0b4e0939801] Running
	I0908 11:57:40.538878  811458 system_pods.go:89] "metrics-server-746fcd58dc-bbz2v" [a9b335ae-0a9f-4124-9a90-bf148a7580ee] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 11:57:40.538886  811458 system_pods.go:89] "storage-provisioner" [5ef0a874-a428-461f-8a06-9729c469a4b4] Running
	I0908 11:57:40.538898  811458 system_pods.go:126] duration metric: took 2.779097ms to wait for k8s-apps to be running ...
	I0908 11:57:40.538912  811458 system_svc.go:44] waiting for kubelet service to be running ....
	I0908 11:57:40.538969  811458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 11:57:40.557715  811458 system_svc.go:56] duration metric: took 18.796502ms WaitForService to wait for kubelet
	I0908 11:57:40.557743  811458 kubeadm.go:578] duration metric: took 6.895770979s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 11:57:40.557769  811458 node_conditions.go:102] verifying NodePressure condition ...
	I0908 11:57:40.563199  811458 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0908 11:57:40.563230  811458 node_conditions.go:123] node cpu capacity is 2
	I0908 11:57:40.563245  811458 node_conditions.go:105] duration metric: took 5.46967ms to run NodePressure ...
	I0908 11:57:40.563261  811458 start.go:241] waiting for startup goroutines ...
	I0908 11:57:40.563272  811458 start.go:246] waiting for cluster config update ...
	I0908 11:57:40.563312  811458 start.go:255] writing updated cluster config ...
	I0908 11:57:40.563673  811458 ssh_runner.go:195] Run: rm -f paused
	I0908 11:57:40.570429  811458 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 11:57:40.574220  811458 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-nvjls" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:57:40.580056  811458 pod_ready.go:94] pod "coredns-66bc5c9577-nvjls" is "Ready"
	I0908 11:57:40.580080  811458 pod_ready.go:86] duration metric: took 5.831172ms for pod "coredns-66bc5c9577-nvjls" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:57:40.584053  811458 pod_ready.go:83] waiting for pod "etcd-no-preload-474007" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:57:41.589215  811458 pod_ready.go:94] pod "etcd-no-preload-474007" is "Ready"
	I0908 11:57:41.589265  811458 pod_ready.go:86] duration metric: took 1.005188219s for pod "etcd-no-preload-474007" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:57:41.592034  811458 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-474007" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:57:41.597858  811458 pod_ready.go:94] pod "kube-apiserver-no-preload-474007" is "Ready"
	I0908 11:57:41.597893  811458 pod_ready.go:86] duration metric: took 5.830632ms for pod "kube-apiserver-no-preload-474007" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:57:41.600546  811458 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-474007" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:57:41.777006  811458 pod_ready.go:94] pod "kube-controller-manager-no-preload-474007" is "Ready"
	I0908 11:57:41.777036  811458 pod_ready.go:86] duration metric: took 176.468219ms for pod "kube-controller-manager-no-preload-474007" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:57:41.976524  811458 pod_ready.go:83] waiting for pod "kube-proxy-9fljr" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:57:42.375984  811458 pod_ready.go:94] pod "kube-proxy-9fljr" is "Ready"
	I0908 11:57:42.376021  811458 pod_ready.go:86] duration metric: took 399.459333ms for pod "kube-proxy-9fljr" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:57:42.576413  811458 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-474007" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:57:42.975354  811458 pod_ready.go:94] pod "kube-scheduler-no-preload-474007" is "Ready"
	I0908 11:57:42.975387  811458 pod_ready.go:86] duration metric: took 398.943076ms for pod "kube-scheduler-no-preload-474007" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:57:42.975407  811458 pod_ready.go:40] duration metric: took 2.404937403s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 11:57:43.028540  811458 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0908 11:57:43.030362  811458 out.go:179] * Done! kubectl is now configured to use "no-preload-474007" cluster and "default" namespace by default
	I0908 11:57:42.113167  811802 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 11:57:42.151595  811802 node_ready.go:35] waiting up to 6m0s for node "embed-certs-256792" to be "Ready" ...
	I0908 11:57:42.154026  811802 node_ready.go:49] node "embed-certs-256792" is "Ready"
	I0908 11:57:42.154059  811802 node_ready.go:38] duration metric: took 2.406931ms for node "embed-certs-256792" to be "Ready" ...
	I0908 11:57:42.154073  811802 api_server.go:52] waiting for apiserver process to appear ...
	I0908 11:57:42.154122  811802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 11:57:42.179049  811802 api_server.go:72] duration metric: took 369.167387ms to wait for apiserver process to appear ...
	I0908 11:57:42.179073  811802 api_server.go:88] waiting for apiserver healthz status ...
	I0908 11:57:42.179095  811802 api_server.go:253] Checking apiserver healthz at https://192.168.50.136:8443/healthz ...
	I0908 11:57:42.185770  811802 api_server.go:279] https://192.168.50.136:8443/healthz returned 200:
	ok
	I0908 11:57:42.187424  811802 api_server.go:141] control plane version: v1.34.0
	I0908 11:57:42.187456  811802 api_server.go:131] duration metric: took 8.373725ms to wait for apiserver health ...
	I0908 11:57:42.187466  811802 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 11:57:42.191638  811802 system_pods.go:59] 8 kube-system pods found
	I0908 11:57:42.191666  811802 system_pods.go:61] "coredns-66bc5c9577-24xv6" [eb1ab4a7-273c-49a1-8d80-2e3145582e9a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 11:57:42.191675  811802 system_pods.go:61] "etcd-embed-certs-256792" [5012dd79-f6a2-49b6-a6ba-e3cb31c0ab84] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 11:57:42.191683  811802 system_pods.go:61] "kube-apiserver-embed-certs-256792" [d764f944-ceb8-4861-be25-e30f034a4c0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 11:57:42.191705  811802 system_pods.go:61] "kube-controller-manager-embed-certs-256792" [63935a70-f702-45ee-9904-7d07ee903d79] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 11:57:42.191714  811802 system_pods.go:61] "kube-proxy-ph8c8" [bae0a504-7714-4c5b-af89-54a0f2d5c5fa] Running
	I0908 11:57:42.191720  811802 system_pods.go:61] "kube-scheduler-embed-certs-256792" [64de836d-209c-4fcb-91e5-a8266cd048c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 11:57:42.191725  811802 system_pods.go:61] "metrics-server-746fcd58dc-97dr2" [c00533cc-ec1a-45af-a5ee-4f3d7e77d95f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 11:57:42.191729  811802 system_pods.go:61] "storage-provisioner" [bb98e575-b5ce-4181-b7e5-9ea41fde8295] Running
	I0908 11:57:42.191735  811802 system_pods.go:74] duration metric: took 4.262676ms to wait for pod list to return data ...
	I0908 11:57:42.191745  811802 default_sa.go:34] waiting for default service account to be created ...
	I0908 11:57:42.195175  811802 default_sa.go:45] found service account: "default"
	I0908 11:57:42.195203  811802 default_sa.go:55] duration metric: took 3.450712ms for default service account to be created ...
	I0908 11:57:42.195216  811802 system_pods.go:116] waiting for k8s-apps to be running ...
	I0908 11:57:42.200073  811802 system_pods.go:86] 8 kube-system pods found
	I0908 11:57:42.200097  811802 system_pods.go:89] "coredns-66bc5c9577-24xv6" [eb1ab4a7-273c-49a1-8d80-2e3145582e9a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 11:57:42.200127  811802 system_pods.go:89] "etcd-embed-certs-256792" [5012dd79-f6a2-49b6-a6ba-e3cb31c0ab84] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 11:57:42.200136  811802 system_pods.go:89] "kube-apiserver-embed-certs-256792" [d764f944-ceb8-4861-be25-e30f034a4c0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 11:57:42.200143  811802 system_pods.go:89] "kube-controller-manager-embed-certs-256792" [63935a70-f702-45ee-9904-7d07ee903d79] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 11:57:42.200147  811802 system_pods.go:89] "kube-proxy-ph8c8" [bae0a504-7714-4c5b-af89-54a0f2d5c5fa] Running
	I0908 11:57:42.200152  811802 system_pods.go:89] "kube-scheduler-embed-certs-256792" [64de836d-209c-4fcb-91e5-a8266cd048c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 11:57:42.200157  811802 system_pods.go:89] "metrics-server-746fcd58dc-97dr2" [c00533cc-ec1a-45af-a5ee-4f3d7e77d95f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 11:57:42.200163  811802 system_pods.go:89] "storage-provisioner" [bb98e575-b5ce-4181-b7e5-9ea41fde8295] Running
	I0908 11:57:42.200171  811802 system_pods.go:126] duration metric: took 4.949191ms to wait for k8s-apps to be running ...
	I0908 11:57:42.200177  811802 system_svc.go:44] waiting for kubelet service to be running ....
	I0908 11:57:42.200218  811802 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 11:57:42.237702  811802 system_svc.go:56] duration metric: took 37.51307ms WaitForService to wait for kubelet
	I0908 11:57:42.237736  811802 kubeadm.go:578] duration metric: took 427.859269ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 11:57:42.237761  811802 node_conditions.go:102] verifying NodePressure condition ...
	I0908 11:57:42.243804  811802 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0908 11:57:42.243831  811802 node_conditions.go:123] node cpu capacity is 2
	I0908 11:57:42.243846  811802 node_conditions.go:105] duration metric: took 6.080641ms to run NodePressure ...
	I0908 11:57:42.243861  811802 start.go:241] waiting for startup goroutines ...
	I0908 11:57:42.273355  811802 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0908 11:57:42.273380  811802 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0908 11:57:42.281406  811802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 11:57:42.293648  811802 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0908 11:57:42.293677  811802 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0908 11:57:42.306147  811802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0908 11:57:42.331187  811802 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0908 11:57:42.331223  811802 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0908 11:57:42.361906  811802 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0908 11:57:42.361934  811802 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0908 11:57:42.396707  811802 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 11:57:42.396744  811802 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0908 11:57:42.422129  811802 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0908 11:57:42.422161  811802 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0908 11:57:42.446878  811802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 11:57:42.489551  811802 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0908 11:57:42.489587  811802 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0908 11:57:42.574460  811802 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0908 11:57:42.574593  811802 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0908 11:57:42.656186  811802 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0908 11:57:42.656213  811802 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0908 11:57:42.730937  811802 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0908 11:57:42.730976  811802 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0908 11:57:42.801855  811802 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0908 11:57:42.801878  811802 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0908 11:57:42.865770  811802 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0908 11:57:42.865799  811802 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0908 11:57:42.937292  811802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0908 11:57:44.045454  811802 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.739259027s)
	I0908 11:57:44.045539  811802 main.go:141] libmachine: Making call to close driver server
	I0908 11:57:44.045556  811802 main.go:141] libmachine: (embed-certs-256792) Calling .Close
	I0908 11:57:44.045571  811802 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.764122817s)
	I0908 11:57:44.045629  811802 main.go:141] libmachine: Making call to close driver server
	I0908 11:57:44.045644  811802 main.go:141] libmachine: (embed-certs-256792) Calling .Close
	I0908 11:57:44.045886  811802 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:57:44.045902  811802 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:57:44.045912  811802 main.go:141] libmachine: Making call to close driver server
	I0908 11:57:44.045919  811802 main.go:141] libmachine: (embed-certs-256792) Calling .Close
	I0908 11:57:44.046002  811802 main.go:141] libmachine: (embed-certs-256792) DBG | Closing plugin on server side
	I0908 11:57:44.046040  811802 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:57:44.046072  811802 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:57:44.046088  811802 main.go:141] libmachine: Making call to close driver server
	I0908 11:57:44.046096  811802 main.go:141] libmachine: (embed-certs-256792) Calling .Close
	I0908 11:57:44.046126  811802 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:57:44.046145  811802 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:57:44.046416  811802 main.go:141] libmachine: (embed-certs-256792) DBG | Closing plugin on server side
	I0908 11:57:44.046422  811802 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:57:44.046435  811802 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:57:44.066630  811802 main.go:141] libmachine: Making call to close driver server
	I0908 11:57:44.066651  811802 main.go:141] libmachine: (embed-certs-256792) Calling .Close
	I0908 11:57:44.066953  811802 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:57:44.066969  811802 main.go:141] libmachine: (embed-certs-256792) DBG | Closing plugin on server side
	I0908 11:57:44.066973  811802 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:57:44.158538  811802 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.711611263s)
	I0908 11:57:44.158592  811802 main.go:141] libmachine: Making call to close driver server
	I0908 11:57:44.158604  811802 main.go:141] libmachine: (embed-certs-256792) Calling .Close
	I0908 11:57:44.158971  811802 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:57:44.158994  811802 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:57:44.159004  811802 main.go:141] libmachine: Making call to close driver server
	I0908 11:57:44.159012  811802 main.go:141] libmachine: (embed-certs-256792) Calling .Close
	I0908 11:57:44.159012  811802 main.go:141] libmachine: (embed-certs-256792) DBG | Closing plugin on server side
	I0908 11:57:44.159263  811802 main.go:141] libmachine: (embed-certs-256792) DBG | Closing plugin on server side
	I0908 11:57:44.159301  811802 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:57:44.159349  811802 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:57:44.159365  811802 addons.go:479] Verifying addon metrics-server=true in "embed-certs-256792"
	I0908 11:57:44.453332  811802 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.515974628s)
	I0908 11:57:44.453409  811802 main.go:141] libmachine: Making call to close driver server
	I0908 11:57:44.453427  811802 main.go:141] libmachine: (embed-certs-256792) Calling .Close
	I0908 11:57:44.453789  811802 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:57:44.453811  811802 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:57:44.453825  811802 main.go:141] libmachine: Making call to close driver server
	I0908 11:57:44.453834  811802 main.go:141] libmachine: (embed-certs-256792) Calling .Close
	I0908 11:57:44.454113  811802 main.go:141] libmachine: (embed-certs-256792) DBG | Closing plugin on server side
	I0908 11:57:44.454157  811802 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:57:44.454167  811802 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:57:44.457548  811802 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-256792 addons enable metrics-server
	
	I0908 11:57:44.459071  811802 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0908 11:57:43.572266  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:43.572778  812159 main.go:141] libmachine: (newest-cni-549052) DBG | unable to find current IP address of domain newest-cni-549052 in network mk-newest-cni-549052
	I0908 11:57:43.572830  812159 main.go:141] libmachine: (newest-cni-549052) DBG | I0908 11:57:43.572770  812218 retry.go:31] will retry after 4.203206967s: waiting for domain to come up
	I0908 11:57:44.460508  811802 addons.go:514] duration metric: took 2.650596404s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0908 11:57:44.460557  811802 start.go:246] waiting for cluster config update ...
	I0908 11:57:44.460590  811802 start.go:255] writing updated cluster config ...
	I0908 11:57:44.460866  811802 ssh_runner.go:195] Run: rm -f paused
	I0908 11:57:44.471864  811802 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 11:57:44.477590  811802 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-24xv6" in "kube-system" namespace to be "Ready" or be gone ...
	W0908 11:57:46.484885  811802 pod_ready.go:104] pod "coredns-66bc5c9577-24xv6" is not "Ready", error: <nil>
	I0908 11:57:43.013813  812547 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 11:57:43.013869  812547 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21503-748170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0908 11:57:43.013882  812547 cache.go:58] Caching tarball of preloaded images
	I0908 11:57:43.013978  812547 preload.go:172] Found /home/jenkins/minikube-integration/21503-748170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0908 11:57:43.013992  812547 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0908 11:57:43.014149  812547 profile.go:143] Saving config to /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/default-k8s-diff-port-149795/config.json ...
	I0908 11:57:43.014393  812547 start.go:360] acquireMachinesLock for default-k8s-diff-port-149795: {Name:mkc620e3900da426b9c156141af1783a234a8bd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0908 11:57:49.235322  812547 start.go:364] duration metric: took 6.220859275s to acquireMachinesLock for "default-k8s-diff-port-149795"
	I0908 11:57:49.235413  812547 start.go:96] Skipping create...Using existing machine configuration
	I0908 11:57:49.235450  812547 fix.go:54] fixHost starting: 
	I0908 11:57:49.235913  812547 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:57:49.235978  812547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:57:49.255609  812547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40215
	I0908 11:57:49.256215  812547 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:57:49.256774  812547 main.go:141] libmachine: Using API Version  1
	I0908 11:57:49.256800  812547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:57:49.257283  812547 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:57:49.257495  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .DriverName
	I0908 11:57:49.257678  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetState
	I0908 11:57:49.259525  812547 fix.go:112] recreateIfNeeded on default-k8s-diff-port-149795: state=Stopped err=<nil>
	I0908 11:57:49.259552  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .DriverName
	W0908 11:57:49.259687  812547 fix.go:138] unexpected machine state, will restart: <nil>
	I0908 11:57:47.779359  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:47.779924  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has current primary IP address 192.168.72.253 and MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:47.779950  812159 main.go:141] libmachine: (newest-cni-549052) found domain IP: 192.168.72.253
	I0908 11:57:47.779964  812159 main.go:141] libmachine: (newest-cni-549052) reserving static IP address...
	I0908 11:57:47.780434  812159 main.go:141] libmachine: (newest-cni-549052) DBG | found host DHCP lease matching {name: "newest-cni-549052", mac: "52:54:00:c8:55:ce", ip: "192.168.72.253"} in network mk-newest-cni-549052: {Iface:virbr4 ExpiryTime:2025-09-08 12:57:39 +0000 UTC Type:0 Mac:52:54:00:c8:55:ce Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:newest-cni-549052 Clientid:01:52:54:00:c8:55:ce}
	I0908 11:57:47.780479  812159 main.go:141] libmachine: (newest-cni-549052) DBG | skip adding static IP to network mk-newest-cni-549052 - found existing host DHCP lease matching {name: "newest-cni-549052", mac: "52:54:00:c8:55:ce", ip: "192.168.72.253"}
	I0908 11:57:47.780490  812159 main.go:141] libmachine: (newest-cni-549052) reserved static IP address 192.168.72.253 for domain newest-cni-549052
	I0908 11:57:47.780506  812159 main.go:141] libmachine: (newest-cni-549052) waiting for SSH...
	I0908 11:57:47.780516  812159 main.go:141] libmachine: (newest-cni-549052) DBG | Getting to WaitForSSH function...
	I0908 11:57:47.782769  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:47.783108  812159 main.go:141] libmachine: (newest-cni-549052) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:55:ce", ip: ""} in network mk-newest-cni-549052: {Iface:virbr4 ExpiryTime:2025-09-08 12:57:39 +0000 UTC Type:0 Mac:52:54:00:c8:55:ce Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:newest-cni-549052 Clientid:01:52:54:00:c8:55:ce}
	I0908 11:57:47.783130  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined IP address 192.168.72.253 and MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:47.783270  812159 main.go:141] libmachine: (newest-cni-549052) DBG | Using SSH client type: external
	I0908 11:57:47.783343  812159 main.go:141] libmachine: (newest-cni-549052) DBG | Using SSH private key: /home/jenkins/minikube-integration/21503-748170/.minikube/machines/newest-cni-549052/id_rsa (-rw-------)
	I0908 11:57:47.783383  812159 main.go:141] libmachine: (newest-cni-549052) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.253 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21503-748170/.minikube/machines/newest-cni-549052/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0908 11:57:47.783407  812159 main.go:141] libmachine: (newest-cni-549052) DBG | About to run SSH command:
	I0908 11:57:47.783418  812159 main.go:141] libmachine: (newest-cni-549052) DBG | exit 0
	I0908 11:57:47.913532  812159 main.go:141] libmachine: (newest-cni-549052) DBG | SSH cmd err, output: <nil>: 
	I0908 11:57:47.913970  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetConfigRaw
	I0908 11:57:47.914706  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetIP
	I0908 11:57:47.917266  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:47.917770  812159 main.go:141] libmachine: (newest-cni-549052) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:55:ce", ip: ""} in network mk-newest-cni-549052: {Iface:virbr4 ExpiryTime:2025-09-08 12:57:39 +0000 UTC Type:0 Mac:52:54:00:c8:55:ce Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:newest-cni-549052 Clientid:01:52:54:00:c8:55:ce}
	I0908 11:57:47.917807  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined IP address 192.168.72.253 and MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:47.918057  812159 profile.go:143] Saving config to /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/newest-cni-549052/config.json ...
	I0908 11:57:47.918245  812159 machine.go:93] provisionDockerMachine start ...
	I0908 11:57:47.918264  812159 main.go:141] libmachine: (newest-cni-549052) Calling .DriverName
	I0908 11:57:47.918487  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHHostname
	I0908 11:57:47.920858  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:47.921217  812159 main.go:141] libmachine: (newest-cni-549052) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:55:ce", ip: ""} in network mk-newest-cni-549052: {Iface:virbr4 ExpiryTime:2025-09-08 12:57:39 +0000 UTC Type:0 Mac:52:54:00:c8:55:ce Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:newest-cni-549052 Clientid:01:52:54:00:c8:55:ce}
	I0908 11:57:47.921254  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined IP address 192.168.72.253 and MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:47.921367  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHPort
	I0908 11:57:47.921527  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHKeyPath
	I0908 11:57:47.921672  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHKeyPath
	I0908 11:57:47.921787  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHUsername
	I0908 11:57:47.921932  812159 main.go:141] libmachine: Using SSH client type: native
	I0908 11:57:47.922188  812159 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.72.253 22 <nil> <nil>}
	I0908 11:57:47.922201  812159 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 11:57:48.042912  812159 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0908 11:57:48.042946  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetMachineName
	I0908 11:57:48.043249  812159 buildroot.go:166] provisioning hostname "newest-cni-549052"
	I0908 11:57:48.043281  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetMachineName
	I0908 11:57:48.043490  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHHostname
	I0908 11:57:48.046580  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:48.046968  812159 main.go:141] libmachine: (newest-cni-549052) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:55:ce", ip: ""} in network mk-newest-cni-549052: {Iface:virbr4 ExpiryTime:2025-09-08 12:57:39 +0000 UTC Type:0 Mac:52:54:00:c8:55:ce Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:newest-cni-549052 Clientid:01:52:54:00:c8:55:ce}
	I0908 11:57:48.046999  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined IP address 192.168.72.253 and MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:48.047171  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHPort
	I0908 11:57:48.047362  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHKeyPath
	I0908 11:57:48.047546  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHKeyPath
	I0908 11:57:48.047673  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHUsername
	I0908 11:57:48.047836  812159 main.go:141] libmachine: Using SSH client type: native
	I0908 11:57:48.048117  812159 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.72.253 22 <nil> <nil>}
	I0908 11:57:48.048134  812159 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-549052 && echo "newest-cni-549052" | sudo tee /etc/hostname
	I0908 11:57:48.187686  812159 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-549052
	
	I0908 11:57:48.187712  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHHostname
	I0908 11:57:48.190842  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:48.191117  812159 main.go:141] libmachine: (newest-cni-549052) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:55:ce", ip: ""} in network mk-newest-cni-549052: {Iface:virbr4 ExpiryTime:2025-09-08 12:57:39 +0000 UTC Type:0 Mac:52:54:00:c8:55:ce Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:newest-cni-549052 Clientid:01:52:54:00:c8:55:ce}
	I0908 11:57:48.191147  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined IP address 192.168.72.253 and MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:48.191293  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHPort
	I0908 11:57:48.191523  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHKeyPath
	I0908 11:57:48.191671  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHKeyPath
	I0908 11:57:48.191823  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHUsername
	I0908 11:57:48.192009  812159 main.go:141] libmachine: Using SSH client type: native
	I0908 11:57:48.192281  812159 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.72.253 22 <nil> <nil>}
	I0908 11:57:48.192305  812159 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-549052' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-549052/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-549052' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 11:57:48.321564  812159 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 11:57:48.321597  812159 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21503-748170/.minikube CaCertPath:/home/jenkins/minikube-integration/21503-748170/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21503-748170/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21503-748170/.minikube}
	I0908 11:57:48.321630  812159 buildroot.go:174] setting up certificates
	I0908 11:57:48.321640  812159 provision.go:84] configureAuth start
	I0908 11:57:48.321648  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetMachineName
	I0908 11:57:48.321954  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetIP
	I0908 11:57:48.325174  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:48.325709  812159 main.go:141] libmachine: (newest-cni-549052) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:55:ce", ip: ""} in network mk-newest-cni-549052: {Iface:virbr4 ExpiryTime:2025-09-08 12:57:39 +0000 UTC Type:0 Mac:52:54:00:c8:55:ce Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:newest-cni-549052 Clientid:01:52:54:00:c8:55:ce}
	I0908 11:57:48.325733  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined IP address 192.168.72.253 and MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:48.325937  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHHostname
	I0908 11:57:48.328870  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:48.329300  812159 main.go:141] libmachine: (newest-cni-549052) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:55:ce", ip: ""} in network mk-newest-cni-549052: {Iface:virbr4 ExpiryTime:2025-09-08 12:57:39 +0000 UTC Type:0 Mac:52:54:00:c8:55:ce Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:newest-cni-549052 Clientid:01:52:54:00:c8:55:ce}
	I0908 11:57:48.329342  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined IP address 192.168.72.253 and MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:48.329486  812159 provision.go:143] copyHostCerts
	I0908 11:57:48.329580  812159 exec_runner.go:144] found /home/jenkins/minikube-integration/21503-748170/.minikube/ca.pem, removing ...
	I0908 11:57:48.329603  812159 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21503-748170/.minikube/ca.pem
	I0908 11:57:48.329674  812159 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21503-748170/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21503-748170/.minikube/ca.pem (1078 bytes)
	I0908 11:57:48.329823  812159 exec_runner.go:144] found /home/jenkins/minikube-integration/21503-748170/.minikube/cert.pem, removing ...
	I0908 11:57:48.329838  812159 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21503-748170/.minikube/cert.pem
	I0908 11:57:48.329872  812159 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21503-748170/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21503-748170/.minikube/cert.pem (1123 bytes)
	I0908 11:57:48.329984  812159 exec_runner.go:144] found /home/jenkins/minikube-integration/21503-748170/.minikube/key.pem, removing ...
	I0908 11:57:48.329997  812159 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21503-748170/.minikube/key.pem
	I0908 11:57:48.330028  812159 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21503-748170/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21503-748170/.minikube/key.pem (1675 bytes)
	I0908 11:57:48.330118  812159 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21503-748170/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21503-748170/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21503-748170/.minikube/certs/ca-key.pem org=jenkins.newest-cni-549052 san=[127.0.0.1 192.168.72.253 localhost minikube newest-cni-549052]
	I0908 11:57:48.491599  812159 provision.go:177] copyRemoteCerts
	I0908 11:57:48.491674  812159 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 11:57:48.491700  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHHostname
	I0908 11:57:48.494839  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:48.495296  812159 main.go:141] libmachine: (newest-cni-549052) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:55:ce", ip: ""} in network mk-newest-cni-549052: {Iface:virbr4 ExpiryTime:2025-09-08 12:57:39 +0000 UTC Type:0 Mac:52:54:00:c8:55:ce Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:newest-cni-549052 Clientid:01:52:54:00:c8:55:ce}
	I0908 11:57:48.495327  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined IP address 192.168.72.253 and MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:48.495533  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHPort
	I0908 11:57:48.495725  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHKeyPath
	I0908 11:57:48.495887  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHUsername
	I0908 11:57:48.496027  812159 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/newest-cni-549052/id_rsa Username:docker}
	I0908 11:57:48.585972  812159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0908 11:57:48.619847  812159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0908 11:57:48.649609  812159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0908 11:57:48.684698  812159 provision.go:87] duration metric: took 363.041145ms to configureAuth
	I0908 11:57:48.684734  812159 buildroot.go:189] setting minikube options for container-runtime
	I0908 11:57:48.684978  812159 config.go:182] Loaded profile config "newest-cni-549052": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 11:57:48.685089  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHHostname
	I0908 11:57:48.687895  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:48.688419  812159 main.go:141] libmachine: (newest-cni-549052) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:55:ce", ip: ""} in network mk-newest-cni-549052: {Iface:virbr4 ExpiryTime:2025-09-08 12:57:39 +0000 UTC Type:0 Mac:52:54:00:c8:55:ce Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:newest-cni-549052 Clientid:01:52:54:00:c8:55:ce}
	I0908 11:57:48.688453  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined IP address 192.168.72.253 and MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:48.688668  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHPort
	I0908 11:57:48.688897  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHKeyPath
	I0908 11:57:48.689047  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHKeyPath
	I0908 11:57:48.689187  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHUsername
	I0908 11:57:48.689353  812159 main.go:141] libmachine: Using SSH client type: native
	I0908 11:57:48.689559  812159 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.72.253 22 <nil> <nil>}
	I0908 11:57:48.689576  812159 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0908 11:57:48.959457  812159 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0908 11:57:48.959506  812159 machine.go:96] duration metric: took 1.041228522s to provisionDockerMachine
	I0908 11:57:48.959523  812159 start.go:293] postStartSetup for "newest-cni-549052" (driver="kvm2")
	I0908 11:57:48.959538  812159 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 11:57:48.959561  812159 main.go:141] libmachine: (newest-cni-549052) Calling .DriverName
	I0908 11:57:48.959971  812159 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 11:57:48.960004  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHHostname
	I0908 11:57:48.963119  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:48.963623  812159 main.go:141] libmachine: (newest-cni-549052) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:55:ce", ip: ""} in network mk-newest-cni-549052: {Iface:virbr4 ExpiryTime:2025-09-08 12:57:39 +0000 UTC Type:0 Mac:52:54:00:c8:55:ce Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:newest-cni-549052 Clientid:01:52:54:00:c8:55:ce}
	I0908 11:57:48.963654  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined IP address 192.168.72.253 and MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:48.963775  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHPort
	I0908 11:57:48.964031  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHKeyPath
	I0908 11:57:48.964226  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHUsername
	I0908 11:57:48.964436  812159 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/newest-cni-549052/id_rsa Username:docker}
	I0908 11:57:49.059224  812159 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 11:57:49.064132  812159 info.go:137] Remote host: Buildroot 2025.02
	I0908 11:57:49.064163  812159 filesync.go:126] Scanning /home/jenkins/minikube-integration/21503-748170/.minikube/addons for local assets ...
	I0908 11:57:49.064224  812159 filesync.go:126] Scanning /home/jenkins/minikube-integration/21503-748170/.minikube/files for local assets ...
	I0908 11:57:49.064305  812159 filesync.go:149] local asset: /home/jenkins/minikube-integration/21503-748170/.minikube/files/etc/ssl/certs/7523322.pem -> 7523322.pem in /etc/ssl/certs
	I0908 11:57:49.064411  812159 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0908 11:57:49.076217  812159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/files/etc/ssl/certs/7523322.pem --> /etc/ssl/certs/7523322.pem (1708 bytes)
	I0908 11:57:49.105831  812159 start.go:296] duration metric: took 146.290104ms for postStartSetup
	I0908 11:57:49.105875  812159 fix.go:56] duration metric: took 23.926590374s for fixHost
	I0908 11:57:49.105902  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHHostname
	I0908 11:57:49.108745  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:49.109088  812159 main.go:141] libmachine: (newest-cni-549052) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:55:ce", ip: ""} in network mk-newest-cni-549052: {Iface:virbr4 ExpiryTime:2025-09-08 12:57:39 +0000 UTC Type:0 Mac:52:54:00:c8:55:ce Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:newest-cni-549052 Clientid:01:52:54:00:c8:55:ce}
	I0908 11:57:49.109118  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined IP address 192.168.72.253 and MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:49.109350  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHPort
	I0908 11:57:49.109583  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHKeyPath
	I0908 11:57:49.109754  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHKeyPath
	I0908 11:57:49.109896  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHUsername
	I0908 11:57:49.110082  812159 main.go:141] libmachine: Using SSH client type: native
	I0908 11:57:49.110306  812159 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.72.253 22 <nil> <nil>}
	I0908 11:57:49.110322  812159 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0908 11:57:49.235107  812159 main.go:141] libmachine: SSH cmd err, output: <nil>: 1757332669.209146390
	
	I0908 11:57:49.235139  812159 fix.go:216] guest clock: 1757332669.209146390
	I0908 11:57:49.235150  812159 fix.go:229] Guest: 2025-09-08 11:57:49.20914639 +0000 UTC Remote: 2025-09-08 11:57:49.105879402 +0000 UTC m=+28.497071736 (delta=103.266988ms)
	I0908 11:57:49.235197  812159 fix.go:200] guest clock delta is within tolerance: 103.266988ms
	I0908 11:57:49.235207  812159 start.go:83] releasing machines lock for "newest-cni-549052", held for 24.055963092s
	I0908 11:57:49.235243  812159 main.go:141] libmachine: (newest-cni-549052) Calling .DriverName
	I0908 11:57:49.235556  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetIP
	I0908 11:57:49.239174  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:49.239613  812159 main.go:141] libmachine: (newest-cni-549052) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:55:ce", ip: ""} in network mk-newest-cni-549052: {Iface:virbr4 ExpiryTime:2025-09-08 12:57:39 +0000 UTC Type:0 Mac:52:54:00:c8:55:ce Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:newest-cni-549052 Clientid:01:52:54:00:c8:55:ce}
	I0908 11:57:49.239655  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined IP address 192.168.72.253 and MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:49.239860  812159 main.go:141] libmachine: (newest-cni-549052) Calling .DriverName
	I0908 11:57:49.240442  812159 main.go:141] libmachine: (newest-cni-549052) Calling .DriverName
	I0908 11:57:49.240632  812159 main.go:141] libmachine: (newest-cni-549052) Calling .DriverName
	I0908 11:57:49.240739  812159 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 11:57:49.240780  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHHostname
	I0908 11:57:49.240870  812159 ssh_runner.go:195] Run: cat /version.json
	I0908 11:57:49.240898  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHHostname
	I0908 11:57:49.244158  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:49.244589  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:49.244715  812159 main.go:141] libmachine: (newest-cni-549052) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:55:ce", ip: ""} in network mk-newest-cni-549052: {Iface:virbr4 ExpiryTime:2025-09-08 12:57:39 +0000 UTC Type:0 Mac:52:54:00:c8:55:ce Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:newest-cni-549052 Clientid:01:52:54:00:c8:55:ce}
	I0908 11:57:49.244769  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined IP address 192.168.72.253 and MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:49.244867  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHPort
	I0908 11:57:49.244977  812159 main.go:141] libmachine: (newest-cni-549052) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:55:ce", ip: ""} in network mk-newest-cni-549052: {Iface:virbr4 ExpiryTime:2025-09-08 12:57:39 +0000 UTC Type:0 Mac:52:54:00:c8:55:ce Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:newest-cni-549052 Clientid:01:52:54:00:c8:55:ce}
	I0908 11:57:49.244997  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined IP address 192.168.72.253 and MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:49.245067  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHKeyPath
	I0908 11:57:49.245267  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHPort
	I0908 11:57:49.245320  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHUsername
	I0908 11:57:49.245418  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHKeyPath
	I0908 11:57:49.245482  812159 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/newest-cni-549052/id_rsa Username:docker}
	I0908 11:57:49.245890  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHUsername
	I0908 11:57:49.246153  812159 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/newest-cni-549052/id_rsa Username:docker}
	I0908 11:57:49.362405  812159 ssh_runner.go:195] Run: systemctl --version
	I0908 11:57:49.370661  812159 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0908 11:57:49.528289  812159 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0908 11:57:49.538684  812159 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0908 11:57:49.538751  812159 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 11:57:49.565563  812159 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0908 11:57:49.565593  812159 start.go:495] detecting cgroup driver to use...
	I0908 11:57:49.565761  812159 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 11:57:49.592350  812159 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 11:57:49.613551  812159 docker.go:218] disabling cri-docker service (if available) ...
	I0908 11:57:49.613689  812159 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0908 11:57:49.632732  812159 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0908 11:57:49.651906  812159 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0908 11:57:49.834745  812159 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0908 11:57:50.039957  812159 docker.go:234] disabling docker service ...
	I0908 11:57:50.040032  812159 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0908 11:57:50.061560  812159 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0908 11:57:50.081022  812159 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0908 11:57:50.339178  812159 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0908 11:57:50.552105  812159 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 11:57:50.576407  812159 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 11:57:50.608406  812159 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0908 11:57:50.608591  812159 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:57:50.626566  812159 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0908 11:57:50.626768  812159 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:57:50.647898  812159 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:57:50.663558  812159 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:57:50.680626  812159 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 11:57:50.701014  812159 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:57:50.719402  812159 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:57:50.746088  812159 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:57:50.764464  812159 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 11:57:50.779626  812159 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0908 11:57:50.779714  812159 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0908 11:57:50.809097  812159 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 11:57:50.827846  812159 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:57:51.008407  812159 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0908 11:57:51.171326  812159 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0908 11:57:51.171444  812159 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0908 11:57:51.178059  812159 start.go:563] Will wait 60s for crictl version
	I0908 11:57:51.178133  812159 ssh_runner.go:195] Run: which crictl
	I0908 11:57:51.183452  812159 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 11:57:51.240830  812159 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0908 11:57:51.240940  812159 ssh_runner.go:195] Run: crio --version
	I0908 11:57:51.286378  812159 ssh_runner.go:195] Run: crio --version
	I0908 11:57:51.338118  812159 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0908 11:57:51.339143  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetIP
	I0908 11:57:51.342963  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:51.343507  812159 main.go:141] libmachine: (newest-cni-549052) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:55:ce", ip: ""} in network mk-newest-cni-549052: {Iface:virbr4 ExpiryTime:2025-09-08 12:57:39 +0000 UTC Type:0 Mac:52:54:00:c8:55:ce Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:newest-cni-549052 Clientid:01:52:54:00:c8:55:ce}
	I0908 11:57:51.343536  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined IP address 192.168.72.253 and MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:57:51.343813  812159 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0908 11:57:51.351152  812159 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 11:57:51.375657  812159 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0908 11:57:47.983972  811802 pod_ready.go:94] pod "coredns-66bc5c9577-24xv6" is "Ready"
	I0908 11:57:47.984004  811802 pod_ready.go:86] duration metric: took 3.506380235s for pod "coredns-66bc5c9577-24xv6" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:57:47.993093  811802 pod_ready.go:83] waiting for pod "etcd-embed-certs-256792" in "kube-system" namespace to be "Ready" or be gone ...
	W0908 11:57:50.014039  811802 pod_ready.go:104] pod "etcd-embed-certs-256792" is not "Ready", error: <nil>
	I0908 11:57:49.261606  812547 out.go:252] * Restarting existing kvm2 VM for "default-k8s-diff-port-149795" ...
	I0908 11:57:49.261638  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .Start
	I0908 11:57:49.261799  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) starting domain...
	I0908 11:57:49.261822  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) ensuring networks are active...
	I0908 11:57:49.262614  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Ensuring network default is active
	I0908 11:57:49.262968  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Ensuring network mk-default-k8s-diff-port-149795 is active
	I0908 11:57:49.263618  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) getting domain XML...
	I0908 11:57:49.265935  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) creating domain...
	I0908 11:57:50.935105  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) waiting for IP...
	I0908 11:57:50.936449  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:57:50.937152  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | unable to find current IP address of domain default-k8s-diff-port-149795 in network mk-default-k8s-diff-port-149795
	I0908 11:57:50.937273  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | I0908 11:57:50.937152  812639 retry.go:31] will retry after 249.327002ms: waiting for domain to come up
	I0908 11:57:51.188178  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:57:51.189053  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | unable to find current IP address of domain default-k8s-diff-port-149795 in network mk-default-k8s-diff-port-149795
	I0908 11:57:51.189248  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | I0908 11:57:51.189137  812639 retry.go:31] will retry after 265.912093ms: waiting for domain to come up
	I0908 11:57:51.456953  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:57:51.458188  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | unable to find current IP address of domain default-k8s-diff-port-149795 in network mk-default-k8s-diff-port-149795
	I0908 11:57:51.458219  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | I0908 11:57:51.458148  812639 retry.go:31] will retry after 343.506787ms: waiting for domain to come up
	I0908 11:57:51.803902  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:57:51.804520  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | unable to find current IP address of domain default-k8s-diff-port-149795 in network mk-default-k8s-diff-port-149795
	I0908 11:57:51.804548  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | I0908 11:57:51.804515  812639 retry.go:31] will retry after 600.967003ms: waiting for domain to come up
	I0908 11:57:52.407376  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:57:52.408082  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | unable to find current IP address of domain default-k8s-diff-port-149795 in network mk-default-k8s-diff-port-149795
	I0908 11:57:52.408114  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | I0908 11:57:52.408066  812639 retry.go:31] will retry after 613.161152ms: waiting for domain to come up
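The retry.go lines above show the backoff loop used while the restarted KVM domain waits for a DHCP lease: look up the IP for the VM's MAC address and, if none is assigned yet, sleep for a growing interval and try again. Below is a rough Go sketch of that pattern under stated assumptions; lookupIP is a purely hypothetical stand-in for the libvirt lease query, and the growth factor is illustrative rather than the exact jitter the log shows.

```go
// Hedged sketch of a retry-with-growing-backoff loop like the
// "waiting for domain to come up" lines above. lookupIP is a
// placeholder, not a real libvirt binding.
package main

import (
	"errors"
	"fmt"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// lookupIP would ask the hypervisor for the lease matching this MAC
// address on the host-only network; here it always fails so the sketch
// stays self-contained.
func lookupIP(mac string) (string, error) {
	return "", errNoLease
}

func waitForIP(mac string, attempts int, base time.Duration) (string, error) {
	delay := base
	for i := 0; i < attempts; i++ {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for domain to come up\n", delay)
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the wait, roughly like the increasing intervals in the log
	}
	return "", fmt.Errorf("domain did not get an IP after %d attempts", attempts)
}

func main() {
	if _, err := waitForIP("52:54:00:92:f9:54", 5, 250*time.Millisecond); err != nil {
		fmt.Println(err)
	}
}
```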
	I0908 11:57:51.377153  812159 kubeadm.go:875] updating cluster {Name:newest-cni-549052 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-549052 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.253 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 11:57:51.377322  812159 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 11:57:51.377394  812159 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 11:57:51.440665  812159 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.0". assuming images are not preloaded.
	I0908 11:57:51.440801  812159 ssh_runner.go:195] Run: which lz4
	I0908 11:57:51.446425  812159 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0908 11:57:51.453028  812159 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0908 11:57:51.453068  812159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409455026 bytes)
	I0908 11:57:53.596159  812159 crio.go:462] duration metric: took 2.149798464s to copy over tarball
	I0908 11:57:53.596368  812159 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	W0908 11:57:52.876879  811802 pod_ready.go:104] pod "etcd-embed-certs-256792" is not "Ready", error: <nil>
	I0908 11:57:54.526011  811802 pod_ready.go:94] pod "etcd-embed-certs-256792" is "Ready"
	I0908 11:57:54.526052  811802 pod_ready.go:86] duration metric: took 6.532925098s for pod "etcd-embed-certs-256792" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:57:54.537288  811802 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-256792" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:57:54.556382  811802 pod_ready.go:94] pod "kube-apiserver-embed-certs-256792" is "Ready"
	I0908 11:57:54.556421  811802 pod_ready.go:86] duration metric: took 19.098237ms for pod "kube-apiserver-embed-certs-256792" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:57:54.564214  811802 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-256792" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:57:55.082949  811802 pod_ready.go:94] pod "kube-controller-manager-embed-certs-256792" is "Ready"
	I0908 11:57:55.082992  811802 pod_ready.go:86] duration metric: took 518.748647ms for pod "kube-controller-manager-embed-certs-256792" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:57:55.090074  811802 pod_ready.go:83] waiting for pod "kube-proxy-ph8c8" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:57:55.112048  811802 pod_ready.go:94] pod "kube-proxy-ph8c8" is "Ready"
	I0908 11:57:55.112141  811802 pod_ready.go:86] duration metric: took 22.036176ms for pod "kube-proxy-ph8c8" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:57:55.299989  811802 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-256792" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:57:55.700912  811802 pod_ready.go:94] pod "kube-scheduler-embed-certs-256792" is "Ready"
	I0908 11:57:55.701001  811802 pod_ready.go:86] duration metric: took 400.973642ms for pod "kube-scheduler-embed-certs-256792" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:57:55.701031  811802 pod_ready.go:40] duration metric: took 11.229130008s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 11:57:55.783175  811802 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0908 11:57:55.785247  811802 out.go:179] * Done! kubectl is now configured to use "embed-certs-256792" cluster and "default" namespace by default
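The pod_ready waits above poll each control-plane pod until its Ready condition is True (or the pod is gone). A minimal client-go sketch of that kind of check is below; it assumes a local kubeconfig and the k8s.io/client-go module, the pod name is taken from the log, and the 2-second poll interval is an arbitrary choice for the sketch.

```go
// Hedged sketch: poll a pod until its Ready condition is True,
// in the spirit of the pod_ready waits logged above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	for {
		ok, err := podReady(ctx, cs, "kube-system", "etcd-embed-certs-256792")
		if err == nil && ok {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}
```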
	I0908 11:57:53.022495  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:57:53.023198  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | unable to find current IP address of domain default-k8s-diff-port-149795 in network mk-default-k8s-diff-port-149795
	I0908 11:57:53.023226  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | I0908 11:57:53.023163  812639 retry.go:31] will retry after 728.029384ms: waiting for domain to come up
	I0908 11:57:53.752306  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:57:53.752621  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | unable to find current IP address of domain default-k8s-diff-port-149795 in network mk-default-k8s-diff-port-149795
	I0908 11:57:53.752646  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | I0908 11:57:53.752591  812639 retry.go:31] will retry after 871.524139ms: waiting for domain to come up
	I0908 11:57:54.625864  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:57:54.626780  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | unable to find current IP address of domain default-k8s-diff-port-149795 in network mk-default-k8s-diff-port-149795
	I0908 11:57:54.626808  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | I0908 11:57:54.626664  812639 retry.go:31] will retry after 1.229648452s: waiting for domain to come up
	I0908 11:57:55.858560  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:57:55.859312  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | unable to find current IP address of domain default-k8s-diff-port-149795 in network mk-default-k8s-diff-port-149795
	I0908 11:57:55.859345  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | I0908 11:57:55.859213  812639 retry.go:31] will retry after 1.332770377s: waiting for domain to come up
	I0908 11:57:57.194137  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:57:57.194904  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | unable to find current IP address of domain default-k8s-diff-port-149795 in network mk-default-k8s-diff-port-149795
	I0908 11:57:57.194937  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | I0908 11:57:57.194805  812639 retry.go:31] will retry after 1.80848352s: waiting for domain to come up
	I0908 11:57:55.970733  812159 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.374312152s)
	I0908 11:57:55.970795  812159 crio.go:469] duration metric: took 2.374516649s to extract the tarball
	I0908 11:57:55.970807  812159 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0908 11:57:56.059942  812159 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 11:57:56.124762  812159 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 11:57:56.124795  812159 cache_images.go:85] Images are preloaded, skipping loading
	I0908 11:57:56.124807  812159 kubeadm.go:926] updating node { 192.168.72.253 8443 v1.34.0 crio true true} ...
	I0908 11:57:56.124970  812159 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-549052 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.253
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:newest-cni-549052 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0908 11:57:56.125067  812159 ssh_runner.go:195] Run: crio config
	I0908 11:57:56.196101  812159 cni.go:84] Creating CNI manager for ""
	I0908 11:57:56.196127  812159 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0908 11:57:56.196149  812159 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0908 11:57:56.196180  812159 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.253 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-549052 NodeName:newest-cni-549052 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.253"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.253 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 11:57:56.196346  812159 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.253
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-549052"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.253"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.253"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0908 11:57:56.196418  812159 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 11:57:56.215211  812159 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 11:57:56.215376  812159 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 11:57:56.234276  812159 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0908 11:57:56.271439  812159 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 11:57:56.305092  812159 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
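The 2218-byte file copied here is the multi-document kubeadm.yaml rendered a few lines up (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). As a hedged illustration of one sanity check you could run against such a file, the sketch below walks the YAML stream and confirms that kube-proxy's clusterCIDR matches the pod CIDR passed via kubeadm.pod-network-cidr; it assumes gopkg.in/yaml.v3 and a local copy named kubeadm.yaml, and is not part of minikube.

```go
// Hedged sketch: scan a multi-document kubeadm.yaml and check the
// KubeProxyConfiguration clusterCIDR against the expected pod CIDR.
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// doc picks out only the fields this check needs; other fields in each
// YAML document are ignored by the decoder.
type doc struct {
	Kind        string `yaml:"kind"`
	ClusterCIDR string `yaml:"clusterCIDR"`
}

func main() {
	f, err := os.Open("kubeadm.yaml") // e.g. a copy of /var/tmp/minikube/kubeadm.yaml
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var d doc
		if err := dec.Decode(&d); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		if d.Kind == "KubeProxyConfiguration" {
			if d.ClusterCIDR == "10.42.0.0/16" {
				fmt.Println("kube-proxy clusterCIDR matches the configured pod CIDR")
			} else {
				fmt.Printf("unexpected clusterCIDR: %q\n", d.ClusterCIDR)
			}
		}
	}
}
```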
	I0908 11:57:56.334775  812159 ssh_runner.go:195] Run: grep 192.168.72.253	control-plane.minikube.internal$ /etc/hosts
	I0908 11:57:56.355649  812159 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.253	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 11:57:56.376879  812159 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:57:56.607540  812159 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 11:57:56.638865  812159 certs.go:68] Setting up /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/newest-cni-549052 for IP: 192.168.72.253
	I0908 11:57:56.638898  812159 certs.go:194] generating shared ca certs ...
	I0908 11:57:56.638925  812159 certs.go:226] acquiring lock for ca certs: {Name:mkaa8fe7cb1fe9bdb745b85589d42151c557e20e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:57:56.639125  812159 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21503-748170/.minikube/ca.key
	I0908 11:57:56.639185  812159 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21503-748170/.minikube/proxy-client-ca.key
	I0908 11:57:56.639203  812159 certs.go:256] generating profile certs ...
	I0908 11:57:56.639330  812159 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/newest-cni-549052/client.key
	I0908 11:57:56.639405  812159 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/newest-cni-549052/apiserver.key.23d252d4
	I0908 11:57:56.639459  812159 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/newest-cni-549052/proxy-client.key
	I0908 11:57:56.639640  812159 certs.go:484] found cert: /home/jenkins/minikube-integration/21503-748170/.minikube/certs/752332.pem (1338 bytes)
	W0908 11:57:56.639687  812159 certs.go:480] ignoring /home/jenkins/minikube-integration/21503-748170/.minikube/certs/752332_empty.pem, impossibly tiny 0 bytes
	I0908 11:57:56.639696  812159 certs.go:484] found cert: /home/jenkins/minikube-integration/21503-748170/.minikube/certs/ca-key.pem (1679 bytes)
	I0908 11:57:56.639735  812159 certs.go:484] found cert: /home/jenkins/minikube-integration/21503-748170/.minikube/certs/ca.pem (1078 bytes)
	I0908 11:57:56.639776  812159 certs.go:484] found cert: /home/jenkins/minikube-integration/21503-748170/.minikube/certs/cert.pem (1123 bytes)
	I0908 11:57:56.639806  812159 certs.go:484] found cert: /home/jenkins/minikube-integration/21503-748170/.minikube/certs/key.pem (1675 bytes)
	I0908 11:57:56.639866  812159 certs.go:484] found cert: /home/jenkins/minikube-integration/21503-748170/.minikube/files/etc/ssl/certs/7523322.pem (1708 bytes)
	I0908 11:57:56.645836  812159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 11:57:56.704224  812159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0908 11:57:56.757396  812159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 11:57:56.802349  812159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0908 11:57:56.845704  812159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/newest-cni-549052/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0908 11:57:56.890847  812159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/newest-cni-549052/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0908 11:57:56.939028  812159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/newest-cni-549052/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 11:57:56.977212  812159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/newest-cni-549052/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0908 11:57:57.015764  812159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/files/etc/ssl/certs/7523322.pem --> /usr/share/ca-certificates/7523322.pem (1708 bytes)
	I0908 11:57:57.053427  812159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 11:57:57.091952  812159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/certs/752332.pem --> /usr/share/ca-certificates/752332.pem (1338 bytes)
	I0908 11:57:57.133084  812159 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 11:57:57.160776  812159 ssh_runner.go:195] Run: openssl version
	I0908 11:57:57.168578  812159 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/752332.pem && ln -fs /usr/share/ca-certificates/752332.pem /etc/ssl/certs/752332.pem"
	I0908 11:57:57.187551  812159 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/752332.pem
	I0908 11:57:57.195393  812159 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  8 10:41 /usr/share/ca-certificates/752332.pem
	I0908 11:57:57.195470  812159 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/752332.pem
	I0908 11:57:57.208192  812159 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/752332.pem /etc/ssl/certs/51391683.0"
	I0908 11:57:57.230215  812159 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7523322.pem && ln -fs /usr/share/ca-certificates/7523322.pem /etc/ssl/certs/7523322.pem"
	I0908 11:57:57.245062  812159 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7523322.pem
	I0908 11:57:57.251266  812159 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  8 10:41 /usr/share/ca-certificates/7523322.pem
	I0908 11:57:57.251400  812159 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7523322.pem
	I0908 11:57:57.259619  812159 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7523322.pem /etc/ssl/certs/3ec20f2e.0"
	I0908 11:57:57.275438  812159 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 11:57:57.291284  812159 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:57:57.297096  812159 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 10:30 /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:57:57.297171  812159 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:57:57.304774  812159 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0908 11:57:57.320721  812159 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 11:57:57.326703  812159 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0908 11:57:57.337457  812159 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0908 11:57:57.345786  812159 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0908 11:57:57.355892  812159 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0908 11:57:57.365198  812159 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0908 11:57:57.378654  812159 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
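The openssl x509 -checkend 86400 calls above ask whether each control-plane certificate expires within the next 24 hours; a failing check is what triggers certificate regeneration. The same check expressed in Go, as a small self-contained sketch (the certificate filename is illustrative):

```go
// Hedged sketch: report whether a PEM certificate expires within a
// given window, equivalent in spirit to "openssl x509 -checkend 86400".
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// true if the certificate's NotAfter falls inside the next d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if soon {
		fmt.Println("certificate expires within 24h; regeneration needed")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}
```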
	I0908 11:57:57.387545  812159 kubeadm.go:392] StartCluster: {Name:newest-cni-549052 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-549052 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.253 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 11:57:57.387656  812159 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0908 11:57:57.387758  812159 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 11:57:57.464500  812159 cri.go:89] found id: ""
	I0908 11:57:57.464693  812159 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0908 11:57:57.486928  812159 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0908 11:57:57.487025  812159 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0908 11:57:57.487123  812159 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0908 11:57:57.510272  812159 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0908 11:57:57.511210  812159 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-549052" does not appear in /home/jenkins/minikube-integration/21503-748170/kubeconfig
	I0908 11:57:57.511776  812159 kubeconfig.go:62] /home/jenkins/minikube-integration/21503-748170/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-549052" cluster setting kubeconfig missing "newest-cni-549052" context setting]
	I0908 11:57:57.512553  812159 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21503-748170/kubeconfig: {Name:mk78ced2572c8fbe21fb139deb9ae019703be092 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:57:57.623218  812159 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0908 11:57:57.637757  812159 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.72.253
	I0908 11:57:57.637811  812159 kubeadm.go:1152] stopping kube-system containers ...
	I0908 11:57:57.637832  812159 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0908 11:57:57.637908  812159 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 11:57:57.691283  812159 cri.go:89] found id: ""
	I0908 11:57:57.691379  812159 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0908 11:57:57.716106  812159 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0908 11:57:57.731578  812159 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0908 11:57:57.731603  812159 kubeadm.go:157] found existing configuration files:
	
	I0908 11:57:57.731664  812159 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0908 11:57:57.746531  812159 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0908 11:57:57.746608  812159 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0908 11:57:57.759257  812159 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0908 11:57:57.772840  812159 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0908 11:57:57.772906  812159 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0908 11:57:57.786965  812159 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0908 11:57:57.800254  812159 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0908 11:57:57.800338  812159 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0908 11:57:57.812215  812159 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0908 11:57:57.823082  812159 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0908 11:57:57.823148  812159 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0908 11:57:57.835098  812159 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0908 11:57:57.847109  812159 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 11:57:57.918103  812159 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 11:57:59.272863  812159 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.354713833s)
	I0908 11:57:59.272905  812159 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0908 11:57:59.651778  812159 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 11:57:59.764562  812159 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0908 11:57:59.901757  812159 api_server.go:52] waiting for apiserver process to appear ...
	I0908 11:57:59.901864  812159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 11:58:00.402003  812159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 11:57:59.005793  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:57:59.006326  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | unable to find current IP address of domain default-k8s-diff-port-149795 in network mk-default-k8s-diff-port-149795
	I0908 11:57:59.006354  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | I0908 11:57:59.006279  812639 retry.go:31] will retry after 2.473556197s: waiting for domain to come up
	I0908 11:58:01.481350  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:01.482159  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | unable to find current IP address of domain default-k8s-diff-port-149795 in network mk-default-k8s-diff-port-149795
	I0908 11:58:01.482187  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | I0908 11:58:01.482010  812639 retry.go:31] will retry after 2.823753092s: waiting for domain to come up
	I0908 11:58:00.902932  812159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 11:58:01.402528  812159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 11:58:01.464965  812159 api_server.go:72] duration metric: took 1.563207193s to wait for apiserver process to appear ...
	I0908 11:58:01.465008  812159 api_server.go:88] waiting for apiserver healthz status ...
	I0908 11:58:01.465038  812159 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8443/healthz ...
	I0908 11:58:01.465860  812159 api_server.go:269] stopped: https://192.168.72.253:8443/healthz: Get "https://192.168.72.253:8443/healthz": dial tcp 192.168.72.253:8443: connect: connection refused
	I0908 11:58:01.965402  812159 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8443/healthz ...
	I0908 11:58:05.019304  812159 api_server.go:279] https://192.168.72.253:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0908 11:58:05.019353  812159 api_server.go:103] status: https://192.168.72.253:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0908 11:58:05.019374  812159 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8443/healthz ...
	I0908 11:58:05.081963  812159 api_server.go:279] https://192.168.72.253:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0908 11:58:05.081995  812159 api_server.go:103] status: https://192.168.72.253:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0908 11:58:05.466024  812159 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8443/healthz ...
	I0908 11:58:05.471201  812159 api_server.go:279] https://192.168.72.253:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 11:58:05.471232  812159 api_server.go:103] status: https://192.168.72.253:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0908 11:58:05.965877  812159 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8443/healthz ...
	I0908 11:58:05.984640  812159 api_server.go:279] https://192.168.72.253:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 11:58:05.984751  812159 api_server.go:103] status: https://192.168.72.253:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0908 11:58:06.465387  812159 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8443/healthz ...
	I0908 11:58:06.474486  812159 api_server.go:279] https://192.168.72.253:8443/healthz returned 200:
	ok
	I0908 11:58:06.487363  812159 api_server.go:141] control plane version: v1.34.0
	I0908 11:58:06.487395  812159 api_server.go:131] duration metric: took 5.022379369s to wait for apiserver health ...
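The sequence above (connection refused, then 403 for the anonymous user, then 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish, and finally 200) is the usual progression while a restarted apiserver comes up. Below is a self-contained Go sketch of such a healthz poll; it skips TLS verification purely to keep the example short, whereas minikube itself authenticates against the cluster's CA and client certificates, and the 5xx/4xx handling here is only illustrative.

```go
// Hedged sketch: poll an apiserver /healthz endpoint until it returns
// 200, tolerating the early connection-refused / 403 / 500 responses.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		} else {
			fmt.Printf("healthz not reachable yet: %v\n", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.253:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```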
	I0908 11:58:06.487406  812159 cni.go:84] Creating CNI manager for ""
	I0908 11:58:06.487413  812159 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0908 11:58:06.488862  812159 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0908 11:58:06.490427  812159 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0908 11:58:06.532385  812159 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0908 11:58:06.584116  812159 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 11:58:06.591628  812159 system_pods.go:59] 8 kube-system pods found
	I0908 11:58:06.591696  812159 system_pods.go:61] "coredns-66bc5c9577-k9fz2" [56b6d720-5155-4da7-b02f-fd0a70f84b08] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 11:58:06.591710  812159 system_pods.go:61] "etcd-newest-cni-549052" [6f3da5eb-9f20-4a77-b5c4-62c1b9a274d7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 11:58:06.591719  812159 system_pods.go:61] "kube-apiserver-newest-cni-549052" [b4dfa754-a3c0-4462-a3e9-e8c9826f82b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 11:58:06.591728  812159 system_pods.go:61] "kube-controller-manager-newest-cni-549052" [4527e053-5868-4788-a6d8-09c02292d1a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 11:58:06.591735  812159 system_pods.go:61] "kube-proxy-n9kwb" [9d23138f-39c4-4ffa-8e33-e2f0eaea4051] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0908 11:58:06.591744  812159 system_pods.go:61] "kube-scheduler-newest-cni-549052" [376f1898-d179-4213-82dc-6eb522068d16] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 11:58:06.591752  812159 system_pods.go:61] "metrics-server-746fcd58dc-4jzrw" [295f1a4b-153b-4b9c-bfb1-38a153f63a87] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 11:58:06.591764  812159 system_pods.go:61] "storage-provisioner" [5e5d3e8c-59de-401f-bb0a-4ab29de93cdf] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 11:58:06.591774  812159 system_pods.go:74] duration metric: took 7.627257ms to wait for pod list to return data ...
	I0908 11:58:06.591792  812159 node_conditions.go:102] verifying NodePressure condition ...
	I0908 11:58:06.598308  812159 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0908 11:58:06.598355  812159 node_conditions.go:123] node cpu capacity is 2
	I0908 11:58:06.598377  812159 node_conditions.go:105] duration metric: took 6.573897ms to run NodePressure ...
	I0908 11:58:06.598403  812159 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 11:58:06.918992  812159 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0908 11:58:06.937515  812159 ops.go:34] apiserver oom_adj: -16
	I0908 11:58:06.937544  812159 kubeadm.go:593] duration metric: took 9.450498392s to restartPrimaryControlPlane
	I0908 11:58:06.937557  812159 kubeadm.go:394] duration metric: took 9.550021975s to StartCluster
	I0908 11:58:06.937584  812159 settings.go:142] acquiring lock: {Name:mk18c67e9470bbfdfeaf7a5d3ce5d7a1813bc966 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:58:06.937686  812159 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21503-748170/kubeconfig
	I0908 11:58:06.939400  812159 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21503-748170/kubeconfig: {Name:mk78ced2572c8fbe21fb139deb9ae019703be092 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:58:06.939706  812159 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.253 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0908 11:58:06.939802  812159 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0908 11:58:06.939909  812159 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-549052"
	I0908 11:58:06.939931  812159 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-549052"
	W0908 11:58:06.939945  812159 addons.go:247] addon storage-provisioner should already be in state true
	I0908 11:58:06.939971  812159 addons.go:69] Setting default-storageclass=true in profile "newest-cni-549052"
	I0908 11:58:06.940015  812159 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-549052"
	I0908 11:58:06.939997  812159 addons.go:69] Setting metrics-server=true in profile "newest-cni-549052"
	I0908 11:58:06.940034  812159 addons.go:238] Setting addon metrics-server=true in "newest-cni-549052"
	W0908 11:58:06.940043  812159 addons.go:247] addon metrics-server should already be in state true
	I0908 11:58:06.940050  812159 config.go:182] Loaded profile config "newest-cni-549052": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 11:58:06.940088  812159 host.go:66] Checking if "newest-cni-549052" exists ...
	I0908 11:58:06.939980  812159 host.go:66] Checking if "newest-cni-549052" exists ...
	I0908 11:58:06.940490  812159 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:58:06.940504  812159 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:58:06.939977  812159 addons.go:69] Setting dashboard=true in profile "newest-cni-549052"
	I0908 11:58:06.940524  812159 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:58:06.940526  812159 addons.go:238] Setting addon dashboard=true in "newest-cni-549052"
	W0908 11:58:06.940537  812159 addons.go:247] addon dashboard should already be in state true
	I0908 11:58:06.940557  812159 host.go:66] Checking if "newest-cni-549052" exists ...
	I0908 11:58:06.940557  812159 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:58:06.940596  812159 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:58:06.940720  812159 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:58:06.940872  812159 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:58:06.940971  812159 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:58:06.941152  812159 out.go:179] * Verifying Kubernetes components...
	I0908 11:58:06.942460  812159 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:58:06.960366  812159 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36231
	I0908 11:58:06.960526  812159 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40785
	I0908 11:58:06.960592  812159 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43601
	I0908 11:58:06.961045  812159 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:58:06.961055  812159 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:58:06.961093  812159 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33949
	I0908 11:58:06.961132  812159 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:58:06.961638  812159 main.go:141] libmachine: Using API Version  1
	I0908 11:58:06.961661  812159 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:58:06.961738  812159 main.go:141] libmachine: Using API Version  1
	I0908 11:58:06.961758  812159 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:58:06.961821  812159 main.go:141] libmachine: Using API Version  1
	I0908 11:58:06.961839  812159 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:58:06.961841  812159 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:58:06.962218  812159 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:58:06.962258  812159 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:58:06.962276  812159 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:58:06.962358  812159 main.go:141] libmachine: Using API Version  1
	I0908 11:58:06.962367  812159 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:58:06.962834  812159 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:58:06.962836  812159 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:58:06.962872  812159 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:58:06.962886  812159 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:58:06.963106  812159 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:58:06.963156  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetState
	I0908 11:58:06.963591  812159 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:58:06.963624  812159 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:58:06.966128  812159 addons.go:238] Setting addon default-storageclass=true in "newest-cni-549052"
	W0908 11:58:06.966146  812159 addons.go:247] addon default-storageclass should already be in state true
	I0908 11:58:06.966174  812159 host.go:66] Checking if "newest-cni-549052" exists ...
	I0908 11:58:06.966448  812159 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:58:06.966471  812159 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:58:06.982657  812159 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35105
	I0908 11:58:06.983247  812159 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:58:06.983304  812159 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38259
	I0908 11:58:06.984118  812159 main.go:141] libmachine: Using API Version  1
	I0908 11:58:06.984140  812159 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:58:06.984338  812159 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:58:06.984433  812159 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34807
	I0908 11:58:06.984604  812159 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:58:06.984817  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetState
	I0908 11:58:06.984922  812159 main.go:141] libmachine: Using API Version  1
	I0908 11:58:06.984992  812159 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:58:06.985062  812159 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:58:06.985634  812159 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:58:06.985823  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetState
	I0908 11:58:06.986269  812159 main.go:141] libmachine: Using API Version  1
	I0908 11:58:06.986294  812159 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:58:06.986675  812159 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35733
	I0908 11:58:06.987058  812159 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:58:06.987115  812159 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:58:06.987598  812159 main.go:141] libmachine: Using API Version  1
	I0908 11:58:06.987626  812159 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:58:06.987711  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetState
	I0908 11:58:06.988003  812159 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:58:06.988697  812159 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:58:06.988741  812159 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:58:06.989378  812159 main.go:141] libmachine: (newest-cni-549052) Calling .DriverName
	I0908 11:58:06.989781  812159 main.go:141] libmachine: (newest-cni-549052) Calling .DriverName
	I0908 11:58:06.990188  812159 main.go:141] libmachine: (newest-cni-549052) Calling .DriverName
	I0908 11:58:06.990896  812159 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0908 11:58:06.991714  812159 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 11:58:06.991751  812159 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0908 11:58:06.992503  812159 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0908 11:58:06.992518  812159 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0908 11:58:06.992539  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHHostname
	I0908 11:58:06.993213  812159 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 11:58:06.993260  812159 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0908 11:58:06.993282  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHHostname
	I0908 11:58:06.994155  812159 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0908 11:58:04.307776  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:04.308591  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | unable to find current IP address of domain default-k8s-diff-port-149795 in network mk-default-k8s-diff-port-149795
	I0908 11:58:04.308621  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | I0908 11:58:04.308482  812639 retry.go:31] will retry after 3.169091318s: waiting for domain to come up
	I0908 11:58:07.481840  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:07.482455  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | unable to find current IP address of domain default-k8s-diff-port-149795 in network mk-default-k8s-diff-port-149795
	I0908 11:58:07.482485  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | I0908 11:58:07.482426  812639 retry.go:31] will retry after 4.873827649s: waiting for domain to come up
	I0908 11:58:06.995651  812159 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0908 11:58:06.995667  812159 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0908 11:58:06.995682  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHHostname
	I0908 11:58:06.996803  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:58:06.997568  812159 main.go:141] libmachine: (newest-cni-549052) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:55:ce", ip: ""} in network mk-newest-cni-549052: {Iface:virbr4 ExpiryTime:2025-09-08 12:57:39 +0000 UTC Type:0 Mac:52:54:00:c8:55:ce Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:newest-cni-549052 Clientid:01:52:54:00:c8:55:ce}
	I0908 11:58:06.997599  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined IP address 192.168.72.253 and MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:58:06.998621  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:58:06.998653  812159 main.go:141] libmachine: (newest-cni-549052) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:55:ce", ip: ""} in network mk-newest-cni-549052: {Iface:virbr4 ExpiryTime:2025-09-08 12:57:39 +0000 UTC Type:0 Mac:52:54:00:c8:55:ce Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:newest-cni-549052 Clientid:01:52:54:00:c8:55:ce}
	I0908 11:58:06.998668  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHPort
	I0908 11:58:06.998726  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined IP address 192.168.72.253 and MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:58:06.998826  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHKeyPath
	I0908 11:58:06.998890  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHPort
	I0908 11:58:06.998952  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHUsername
	I0908 11:58:06.999129  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHKeyPath
	I0908 11:58:06.999252  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHUsername
	I0908 11:58:06.999398  812159 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/newest-cni-549052/id_rsa Username:docker}
	I0908 11:58:06.999420  812159 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/newest-cni-549052/id_rsa Username:docker}
	I0908 11:58:06.999766  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:58:07.000335  812159 main.go:141] libmachine: (newest-cni-549052) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:55:ce", ip: ""} in network mk-newest-cni-549052: {Iface:virbr4 ExpiryTime:2025-09-08 12:57:39 +0000 UTC Type:0 Mac:52:54:00:c8:55:ce Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:newest-cni-549052 Clientid:01:52:54:00:c8:55:ce}
	I0908 11:58:07.000379  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined IP address 192.168.72.253 and MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:58:07.000623  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHPort
	I0908 11:58:07.000819  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHKeyPath
	I0908 11:58:07.001009  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHUsername
	I0908 11:58:07.001150  812159 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/newest-cni-549052/id_rsa Username:docker}
	I0908 11:58:07.027364  812159 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45311
	I0908 11:58:07.027938  812159 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:58:07.028543  812159 main.go:141] libmachine: Using API Version  1
	I0908 11:58:07.028576  812159 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:58:07.029074  812159 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:58:07.029397  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetState
	I0908 11:58:07.031401  812159 main.go:141] libmachine: (newest-cni-549052) Calling .DriverName
	I0908 11:58:07.031636  812159 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0908 11:58:07.031654  812159 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0908 11:58:07.031674  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHHostname
	I0908 11:58:07.034636  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:58:07.035102  812159 main.go:141] libmachine: (newest-cni-549052) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:55:ce", ip: ""} in network mk-newest-cni-549052: {Iface:virbr4 ExpiryTime:2025-09-08 12:57:39 +0000 UTC Type:0 Mac:52:54:00:c8:55:ce Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:newest-cni-549052 Clientid:01:52:54:00:c8:55:ce}
	I0908 11:58:07.035128  812159 main.go:141] libmachine: (newest-cni-549052) DBG | domain newest-cni-549052 has defined IP address 192.168.72.253 and MAC address 52:54:00:c8:55:ce in network mk-newest-cni-549052
	I0908 11:58:07.035453  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHPort
	I0908 11:58:07.035616  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHKeyPath
	I0908 11:58:07.035764  812159 main.go:141] libmachine: (newest-cni-549052) Calling .GetSSHUsername
	I0908 11:58:07.035886  812159 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/newest-cni-549052/id_rsa Username:docker}
	I0908 11:58:07.316120  812159 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 11:58:07.341349  812159 api_server.go:52] waiting for apiserver process to appear ...
	I0908 11:58:07.341448  812159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 11:58:07.379877  812159 api_server.go:72] duration metric: took 440.130445ms to wait for apiserver process to appear ...
	I0908 11:58:07.379918  812159 api_server.go:88] waiting for apiserver healthz status ...
	I0908 11:58:07.379944  812159 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8443/healthz ...
	I0908 11:58:07.394846  812159 api_server.go:279] https://192.168.72.253:8443/healthz returned 200:
	ok
	I0908 11:58:07.396315  812159 api_server.go:141] control plane version: v1.34.0
	I0908 11:58:07.396353  812159 api_server.go:131] duration metric: took 16.426352ms to wait for apiserver health ...
	I0908 11:58:07.396369  812159 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 11:58:07.403249  812159 system_pods.go:59] 8 kube-system pods found
	I0908 11:58:07.403277  812159 system_pods.go:61] "coredns-66bc5c9577-k9fz2" [56b6d720-5155-4da7-b02f-fd0a70f84b08] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 11:58:07.403284  812159 system_pods.go:61] "etcd-newest-cni-549052" [6f3da5eb-9f20-4a77-b5c4-62c1b9a274d7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 11:58:07.403294  812159 system_pods.go:61] "kube-apiserver-newest-cni-549052" [b4dfa754-a3c0-4462-a3e9-e8c9826f82b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 11:58:07.403300  812159 system_pods.go:61] "kube-controller-manager-newest-cni-549052" [4527e053-5868-4788-a6d8-09c02292d1a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 11:58:07.403304  812159 system_pods.go:61] "kube-proxy-n9kwb" [9d23138f-39c4-4ffa-8e33-e2f0eaea4051] Running
	I0908 11:58:07.403310  812159 system_pods.go:61] "kube-scheduler-newest-cni-549052" [376f1898-d179-4213-82dc-6eb522068d16] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 11:58:07.403318  812159 system_pods.go:61] "metrics-server-746fcd58dc-4jzrw" [295f1a4b-153b-4b9c-bfb1-38a153f63a87] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 11:58:07.403322  812159 system_pods.go:61] "storage-provisioner" [5e5d3e8c-59de-401f-bb0a-4ab29de93cdf] Running
	I0908 11:58:07.403327  812159 system_pods.go:74] duration metric: took 6.945833ms to wait for pod list to return data ...
	I0908 11:58:07.403335  812159 default_sa.go:34] waiting for default service account to be created ...
	I0908 11:58:07.408141  812159 default_sa.go:45] found service account: "default"
	I0908 11:58:07.408173  812159 default_sa.go:55] duration metric: took 4.831114ms for default service account to be created ...
	I0908 11:58:07.408192  812159 kubeadm.go:578] duration metric: took 468.452171ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0908 11:58:07.408219  812159 node_conditions.go:102] verifying NodePressure condition ...
	I0908 11:58:07.413477  812159 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0908 11:58:07.413502  812159 node_conditions.go:123] node cpu capacity is 2
	I0908 11:58:07.413517  812159 node_conditions.go:105] duration metric: took 5.291281ms to run NodePressure ...
	I0908 11:58:07.413533  812159 start.go:241] waiting for startup goroutines ...
	I0908 11:58:07.585717  812159 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0908 11:58:07.585746  812159 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0908 11:58:07.590627  812159 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0908 11:58:07.590649  812159 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0908 11:58:07.615471  812159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0908 11:58:07.617433  812159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 11:58:07.637921  812159 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0908 11:58:07.637960  812159 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0908 11:58:07.658519  812159 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0908 11:58:07.658558  812159 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0908 11:58:07.716711  812159 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 11:58:07.716747  812159 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0908 11:58:07.741200  812159 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0908 11:58:07.741272  812159 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0908 11:58:07.784300  812159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 11:58:07.838089  812159 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0908 11:58:07.838113  812159 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0908 11:58:07.916483  812159 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0908 11:58:07.916517  812159 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0908 11:58:07.994434  812159 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0908 11:58:07.994467  812159 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0908 11:58:08.034252  812159 main.go:141] libmachine: Making call to close driver server
	I0908 11:58:08.034281  812159 main.go:141] libmachine: (newest-cni-549052) Calling .Close
	I0908 11:58:08.034576  812159 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:58:08.034596  812159 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:58:08.034606  812159 main.go:141] libmachine: Making call to close driver server
	I0908 11:58:08.034614  812159 main.go:141] libmachine: (newest-cni-549052) Calling .Close
	I0908 11:58:08.034890  812159 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:58:08.034913  812159 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:58:08.034914  812159 main.go:141] libmachine: (newest-cni-549052) DBG | Closing plugin on server side
	I0908 11:58:08.051590  812159 main.go:141] libmachine: Making call to close driver server
	I0908 11:58:08.051616  812159 main.go:141] libmachine: (newest-cni-549052) Calling .Close
	I0908 11:58:08.051947  812159 main.go:141] libmachine: (newest-cni-549052) DBG | Closing plugin on server side
	I0908 11:58:08.052003  812159 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:58:08.052014  812159 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:58:08.077806  812159 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0908 11:58:08.077837  812159 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0908 11:58:08.158787  812159 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0908 11:58:08.158817  812159 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0908 11:58:08.222357  812159 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0908 11:58:08.222389  812159 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0908 11:58:08.273376  812159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0908 11:58:09.418444  812159 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.634102035s)
	I0908 11:58:09.418507  812159 main.go:141] libmachine: Making call to close driver server
	I0908 11:58:09.418522  812159 main.go:141] libmachine: (newest-cni-549052) Calling .Close
	I0908 11:58:09.418753  812159 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.801281927s)
	I0908 11:58:09.418793  812159 main.go:141] libmachine: Making call to close driver server
	I0908 11:58:09.418806  812159 main.go:141] libmachine: (newest-cni-549052) Calling .Close
	I0908 11:58:09.418823  812159 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:58:09.418841  812159 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:58:09.418854  812159 main.go:141] libmachine: Making call to close driver server
	I0908 11:58:09.418863  812159 main.go:141] libmachine: (newest-cni-549052) Calling .Close
	I0908 11:58:09.419030  812159 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:58:09.419041  812159 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:58:09.419057  812159 main.go:141] libmachine: Making call to close driver server
	I0908 11:58:09.419063  812159 main.go:141] libmachine: (newest-cni-549052) Calling .Close
	I0908 11:58:09.420923  812159 main.go:141] libmachine: (newest-cni-549052) DBG | Closing plugin on server side
	I0908 11:58:09.420930  812159 main.go:141] libmachine: (newest-cni-549052) DBG | Closing plugin on server side
	I0908 11:58:09.420929  812159 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:58:09.420954  812159 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:58:09.420966  812159 addons.go:479] Verifying addon metrics-server=true in "newest-cni-549052"
	I0908 11:58:09.420929  812159 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:58:09.421023  812159 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:58:09.719509  812159 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.446049393s)
	I0908 11:58:09.719603  812159 main.go:141] libmachine: Making call to close driver server
	I0908 11:58:09.719621  812159 main.go:141] libmachine: (newest-cni-549052) Calling .Close
	I0908 11:58:09.719989  812159 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:58:09.720009  812159 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:58:09.720020  812159 main.go:141] libmachine: Making call to close driver server
	I0908 11:58:09.720029  812159 main.go:141] libmachine: (newest-cni-549052) Calling .Close
	I0908 11:58:09.720303  812159 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:58:09.720337  812159 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:58:09.721883  812159 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-549052 addons enable metrics-server
	
	I0908 11:58:09.723171  812159 out.go:179] * Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
	I0908 11:58:09.724322  812159 addons.go:514] duration metric: took 2.784529073s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner dashboard]
	I0908 11:58:09.724366  812159 start.go:246] waiting for cluster config update ...
	I0908 11:58:09.724394  812159 start.go:255] writing updated cluster config ...
	I0908 11:58:09.724722  812159 ssh_runner.go:195] Run: rm -f paused
	I0908 11:58:09.776981  812159 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0908 11:58:09.778629  812159 out.go:179] * Done! kubectl is now configured to use "newest-cni-549052" cluster and "default" namespace by default
	I0908 11:58:12.358330  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:12.359376  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has current primary IP address 192.168.39.109 and MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:12.359409  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) found domain IP: 192.168.39.109
	I0908 11:58:12.359424  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) reserving static IP address...
	I0908 11:58:12.359947  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-149795", mac: "52:54:00:92:f9:54", ip: "192.168.39.109"} in network mk-default-k8s-diff-port-149795: {Iface:virbr3 ExpiryTime:2025-09-08 12:58:04 +0000 UTC Type:0 Mac:52:54:00:92:f9:54 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:default-k8s-diff-port-149795 Clientid:01:52:54:00:92:f9:54}
	I0908 11:58:12.359979  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | skip adding static IP to network mk-default-k8s-diff-port-149795 - found existing host DHCP lease matching {name: "default-k8s-diff-port-149795", mac: "52:54:00:92:f9:54", ip: "192.168.39.109"}
	I0908 11:58:12.359995  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | Getting to WaitForSSH function...
	I0908 11:58:12.360170  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) reserved static IP address 192.168.39.109 for domain default-k8s-diff-port-149795
	I0908 11:58:12.360214  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) waiting for SSH...
	I0908 11:58:12.362949  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:12.363320  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f9:54", ip: ""} in network mk-default-k8s-diff-port-149795: {Iface:virbr3 ExpiryTime:2025-09-08 12:58:04 +0000 UTC Type:0 Mac:52:54:00:92:f9:54 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:default-k8s-diff-port-149795 Clientid:01:52:54:00:92:f9:54}
	I0908 11:58:12.363348  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined IP address 192.168.39.109 and MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:12.363501  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | Using SSH client type: external
	I0908 11:58:12.363526  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | Using SSH private key: /home/jenkins/minikube-integration/21503-748170/.minikube/machines/default-k8s-diff-port-149795/id_rsa (-rw-------)
	I0908 11:58:12.363569  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.109 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21503-748170/.minikube/machines/default-k8s-diff-port-149795/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0908 11:58:12.363582  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | About to run SSH command:
	I0908 11:58:12.363595  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | exit 0
	I0908 11:58:12.498804  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | SSH cmd err, output: <nil>: 
	I0908 11:58:12.499008  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetConfigRaw
	I0908 11:58:12.499663  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetIP
	I0908 11:58:12.502215  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:12.502705  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f9:54", ip: ""} in network mk-default-k8s-diff-port-149795: {Iface:virbr3 ExpiryTime:2025-09-08 12:58:04 +0000 UTC Type:0 Mac:52:54:00:92:f9:54 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:default-k8s-diff-port-149795 Clientid:01:52:54:00:92:f9:54}
	I0908 11:58:12.502730  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined IP address 192.168.39.109 and MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:12.503040  812547 profile.go:143] Saving config to /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/default-k8s-diff-port-149795/config.json ...
	I0908 11:58:12.505722  812547 machine.go:93] provisionDockerMachine start ...
	I0908 11:58:12.505749  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .DriverName
	I0908 11:58:12.505925  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHHostname
	I0908 11:58:12.508541  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:12.508928  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f9:54", ip: ""} in network mk-default-k8s-diff-port-149795: {Iface:virbr3 ExpiryTime:2025-09-08 12:58:04 +0000 UTC Type:0 Mac:52:54:00:92:f9:54 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:default-k8s-diff-port-149795 Clientid:01:52:54:00:92:f9:54}
	I0908 11:58:12.508949  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined IP address 192.168.39.109 and MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:12.509087  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHPort
	I0908 11:58:12.509266  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHKeyPath
	I0908 11:58:12.509427  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHKeyPath
	I0908 11:58:12.509590  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHUsername
	I0908 11:58:12.509821  812547 main.go:141] libmachine: Using SSH client type: native
	I0908 11:58:12.510143  812547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0908 11:58:12.510161  812547 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 11:58:12.622290  812547 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0908 11:58:12.622341  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetMachineName
	I0908 11:58:12.622622  812547 buildroot.go:166] provisioning hostname "default-k8s-diff-port-149795"
	I0908 11:58:12.622653  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetMachineName
	I0908 11:58:12.622830  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHHostname
	I0908 11:58:12.626479  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:12.627086  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f9:54", ip: ""} in network mk-default-k8s-diff-port-149795: {Iface:virbr3 ExpiryTime:2025-09-08 12:58:04 +0000 UTC Type:0 Mac:52:54:00:92:f9:54 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:default-k8s-diff-port-149795 Clientid:01:52:54:00:92:f9:54}
	I0908 11:58:12.627120  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined IP address 192.168.39.109 and MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:12.627378  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHPort
	I0908 11:58:12.627571  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHKeyPath
	I0908 11:58:12.627800  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHKeyPath
	I0908 11:58:12.628005  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHUsername
	I0908 11:58:12.628187  812547 main.go:141] libmachine: Using SSH client type: native
	I0908 11:58:12.628461  812547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0908 11:58:12.628479  812547 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-149795 && echo "default-k8s-diff-port-149795" | sudo tee /etc/hostname
	I0908 11:58:12.763725  812547 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-149795
	
	I0908 11:58:12.763757  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHHostname
	I0908 11:58:12.767404  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:12.767913  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f9:54", ip: ""} in network mk-default-k8s-diff-port-149795: {Iface:virbr3 ExpiryTime:2025-09-08 12:58:04 +0000 UTC Type:0 Mac:52:54:00:92:f9:54 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:default-k8s-diff-port-149795 Clientid:01:52:54:00:92:f9:54}
	I0908 11:58:12.767939  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHPort
	I0908 11:58:12.767953  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined IP address 192.168.39.109 and MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:12.768136  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHKeyPath
	I0908 11:58:12.768258  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHKeyPath
	I0908 11:58:12.768348  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHUsername
	I0908 11:58:12.768475  812547 main.go:141] libmachine: Using SSH client type: native
	I0908 11:58:12.768760  812547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0908 11:58:12.768789  812547 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-149795' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-149795/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-149795' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 11:58:12.898758  812547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 11:58:12.898806  812547 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21503-748170/.minikube CaCertPath:/home/jenkins/minikube-integration/21503-748170/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21503-748170/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21503-748170/.minikube}
	I0908 11:58:12.898834  812547 buildroot.go:174] setting up certificates
	I0908 11:58:12.898847  812547 provision.go:84] configureAuth start
	I0908 11:58:12.898860  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetMachineName
	I0908 11:58:12.899183  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetIP
	I0908 11:58:12.902652  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:12.903213  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f9:54", ip: ""} in network mk-default-k8s-diff-port-149795: {Iface:virbr3 ExpiryTime:2025-09-08 12:58:04 +0000 UTC Type:0 Mac:52:54:00:92:f9:54 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:default-k8s-diff-port-149795 Clientid:01:52:54:00:92:f9:54}
	I0908 11:58:12.903270  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined IP address 192.168.39.109 and MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:12.903577  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHHostname
	I0908 11:58:12.906329  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:12.906718  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f9:54", ip: ""} in network mk-default-k8s-diff-port-149795: {Iface:virbr3 ExpiryTime:2025-09-08 12:58:04 +0000 UTC Type:0 Mac:52:54:00:92:f9:54 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:default-k8s-diff-port-149795 Clientid:01:52:54:00:92:f9:54}
	I0908 11:58:12.906750  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined IP address 192.168.39.109 and MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:12.906913  812547 provision.go:143] copyHostCerts
	I0908 11:58:12.906986  812547 exec_runner.go:144] found /home/jenkins/minikube-integration/21503-748170/.minikube/cert.pem, removing ...
	I0908 11:58:12.907006  812547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21503-748170/.minikube/cert.pem
	I0908 11:58:12.907087  812547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21503-748170/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21503-748170/.minikube/cert.pem (1123 bytes)
	I0908 11:58:12.907208  812547 exec_runner.go:144] found /home/jenkins/minikube-integration/21503-748170/.minikube/key.pem, removing ...
	I0908 11:58:12.907219  812547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21503-748170/.minikube/key.pem
	I0908 11:58:12.907251  812547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21503-748170/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21503-748170/.minikube/key.pem (1675 bytes)
	I0908 11:58:12.907328  812547 exec_runner.go:144] found /home/jenkins/minikube-integration/21503-748170/.minikube/ca.pem, removing ...
	I0908 11:58:12.907337  812547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21503-748170/.minikube/ca.pem
	I0908 11:58:12.907365  812547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21503-748170/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21503-748170/.minikube/ca.pem (1078 bytes)
	I0908 11:58:12.907442  812547 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21503-748170/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21503-748170/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21503-748170/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-149795 san=[127.0.0.1 192.168.39.109 default-k8s-diff-port-149795 localhost minikube]
	I0908 11:58:13.071967  812547 provision.go:177] copyRemoteCerts
	I0908 11:58:13.072035  812547 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 11:58:13.072063  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHHostname
	I0908 11:58:13.075619  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:13.076095  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f9:54", ip: ""} in network mk-default-k8s-diff-port-149795: {Iface:virbr3 ExpiryTime:2025-09-08 12:58:04 +0000 UTC Type:0 Mac:52:54:00:92:f9:54 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:default-k8s-diff-port-149795 Clientid:01:52:54:00:92:f9:54}
	I0908 11:58:13.076133  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined IP address 192.168.39.109 and MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:13.076317  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHPort
	I0908 11:58:13.076518  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHKeyPath
	I0908 11:58:13.076702  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHUsername
	I0908 11:58:13.076862  812547 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/default-k8s-diff-port-149795/id_rsa Username:docker}
	I0908 11:58:13.166933  812547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0908 11:58:13.208896  812547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0908 11:58:13.249356  812547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0908 11:58:13.288748  812547 provision.go:87] duration metric: took 389.88528ms to configureAuth
	I0908 11:58:13.288777  812547 buildroot.go:189] setting minikube options for container-runtime
	I0908 11:58:13.289019  812547 config.go:182] Loaded profile config "default-k8s-diff-port-149795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 11:58:13.289136  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHHostname
	I0908 11:58:13.292902  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:13.293282  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f9:54", ip: ""} in network mk-default-k8s-diff-port-149795: {Iface:virbr3 ExpiryTime:2025-09-08 12:58:04 +0000 UTC Type:0 Mac:52:54:00:92:f9:54 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:default-k8s-diff-port-149795 Clientid:01:52:54:00:92:f9:54}
	I0908 11:58:13.293301  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined IP address 192.168.39.109 and MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:13.293603  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHPort
	I0908 11:58:13.293758  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHKeyPath
	I0908 11:58:13.293867  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHKeyPath
	I0908 11:58:13.293971  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHUsername
	I0908 11:58:13.294218  812547 main.go:141] libmachine: Using SSH client type: native
	I0908 11:58:13.294511  812547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0908 11:58:13.294536  812547 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0908 11:58:13.578099  812547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0908 11:58:13.578130  812547 machine.go:96] duration metric: took 1.072389547s to provisionDockerMachine
	I0908 11:58:13.578143  812547 start.go:293] postStartSetup for "default-k8s-diff-port-149795" (driver="kvm2")
	I0908 11:58:13.578163  812547 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 11:58:13.578195  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .DriverName
	I0908 11:58:13.578523  812547 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 11:58:13.578555  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHHostname
	I0908 11:58:13.581884  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:13.582293  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f9:54", ip: ""} in network mk-default-k8s-diff-port-149795: {Iface:virbr3 ExpiryTime:2025-09-08 12:58:04 +0000 UTC Type:0 Mac:52:54:00:92:f9:54 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:default-k8s-diff-port-149795 Clientid:01:52:54:00:92:f9:54}
	I0908 11:58:13.582318  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined IP address 192.168.39.109 and MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:13.582623  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHPort
	I0908 11:58:13.582829  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHKeyPath
	I0908 11:58:13.582964  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHUsername
	I0908 11:58:13.583073  812547 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/default-k8s-diff-port-149795/id_rsa Username:docker}
	I0908 11:58:13.677522  812547 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 11:58:13.683800  812547 info.go:137] Remote host: Buildroot 2025.02
	I0908 11:58:13.683830  812547 filesync.go:126] Scanning /home/jenkins/minikube-integration/21503-748170/.minikube/addons for local assets ...
	I0908 11:58:13.683893  812547 filesync.go:126] Scanning /home/jenkins/minikube-integration/21503-748170/.minikube/files for local assets ...
	I0908 11:58:13.683994  812547 filesync.go:149] local asset: /home/jenkins/minikube-integration/21503-748170/.minikube/files/etc/ssl/certs/7523322.pem -> 7523322.pem in /etc/ssl/certs
	I0908 11:58:13.684099  812547 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0908 11:58:13.699967  812547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/files/etc/ssl/certs/7523322.pem --> /etc/ssl/certs/7523322.pem (1708 bytes)
	I0908 11:58:13.734189  812547 start.go:296] duration metric: took 156.025518ms for postStartSetup
	I0908 11:58:13.734269  812547 fix.go:56] duration metric: took 24.4988088s for fixHost
	I0908 11:58:13.734304  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHHostname
	I0908 11:58:13.737252  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:13.737721  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f9:54", ip: ""} in network mk-default-k8s-diff-port-149795: {Iface:virbr3 ExpiryTime:2025-09-08 12:58:04 +0000 UTC Type:0 Mac:52:54:00:92:f9:54 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:default-k8s-diff-port-149795 Clientid:01:52:54:00:92:f9:54}
	I0908 11:58:13.737765  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined IP address 192.168.39.109 and MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:13.737924  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHPort
	I0908 11:58:13.738142  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHKeyPath
	I0908 11:58:13.738352  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHKeyPath
	I0908 11:58:13.738501  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHUsername
	I0908 11:58:13.738672  812547 main.go:141] libmachine: Using SSH client type: native
	I0908 11:58:13.738981  812547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0908 11:58:13.738998  812547 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0908 11:58:13.847065  812547 main.go:141] libmachine: SSH cmd err, output: <nil>: 1757332693.825814239
	
	I0908 11:58:13.847088  812547 fix.go:216] guest clock: 1757332693.825814239
	I0908 11:58:13.847097  812547 fix.go:229] Guest: 2025-09-08 11:58:13.825814239 +0000 UTC Remote: 2025-09-08 11:58:13.734277311 +0000 UTC m=+30.887797732 (delta=91.536928ms)
	I0908 11:58:13.847137  812547 fix.go:200] guest clock delta is within tolerance: 91.536928ms
	I0908 11:58:13.847150  812547 start.go:83] releasing machines lock for "default-k8s-diff-port-149795", held for 24.611794175s
	I0908 11:58:13.847177  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .DriverName
	I0908 11:58:13.847472  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetIP
	I0908 11:58:13.850596  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:13.851119  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f9:54", ip: ""} in network mk-default-k8s-diff-port-149795: {Iface:virbr3 ExpiryTime:2025-09-08 12:58:04 +0000 UTC Type:0 Mac:52:54:00:92:f9:54 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:default-k8s-diff-port-149795 Clientid:01:52:54:00:92:f9:54}
	I0908 11:58:13.851148  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined IP address 192.168.39.109 and MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:13.851259  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .DriverName
	I0908 11:58:13.851760  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .DriverName
	I0908 11:58:13.851935  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .DriverName
	I0908 11:58:13.852032  812547 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 11:58:13.852080  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHHostname
	I0908 11:58:13.852129  812547 ssh_runner.go:195] Run: cat /version.json
	I0908 11:58:13.852165  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHHostname
	I0908 11:58:13.855506  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:13.856015  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f9:54", ip: ""} in network mk-default-k8s-diff-port-149795: {Iface:virbr3 ExpiryTime:2025-09-08 12:58:04 +0000 UTC Type:0 Mac:52:54:00:92:f9:54 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:default-k8s-diff-port-149795 Clientid:01:52:54:00:92:f9:54}
	I0908 11:58:13.856048  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined IP address 192.168.39.109 and MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:13.856212  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHPort
	I0908 11:58:13.856295  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:13.856431  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHKeyPath
	I0908 11:58:13.856585  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f9:54", ip: ""} in network mk-default-k8s-diff-port-149795: {Iface:virbr3 ExpiryTime:2025-09-08 12:58:04 +0000 UTC Type:0 Mac:52:54:00:92:f9:54 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:default-k8s-diff-port-149795 Clientid:01:52:54:00:92:f9:54}
	I0908 11:58:13.856606  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHUsername
	I0908 11:58:13.856611  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined IP address 192.168.39.109 and MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:13.856769  812547 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/default-k8s-diff-port-149795/id_rsa Username:docker}
	I0908 11:58:13.857124  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHPort
	I0908 11:58:13.857316  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHKeyPath
	I0908 11:58:13.857506  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHUsername
	I0908 11:58:13.857675  812547 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/default-k8s-diff-port-149795/id_rsa Username:docker}
	I0908 11:58:13.972413  812547 ssh_runner.go:195] Run: systemctl --version
	I0908 11:58:13.979046  812547 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0908 11:58:14.131264  812547 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0908 11:58:14.139064  812547 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0908 11:58:14.139130  812547 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 11:58:14.161596  812547 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0908 11:58:14.161624  812547 start.go:495] detecting cgroup driver to use...
	I0908 11:58:14.161704  812547 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 11:58:14.186106  812547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 11:58:14.206991  812547 docker.go:218] disabling cri-docker service (if available) ...
	I0908 11:58:14.207044  812547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0908 11:58:14.224613  812547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0908 11:58:14.240993  812547 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0908 11:58:14.395059  812547 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0908 11:58:14.544624  812547 docker.go:234] disabling docker service ...
	I0908 11:58:14.544705  812547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0908 11:58:14.561141  812547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0908 11:58:14.575967  812547 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0908 11:58:14.792330  812547 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0908 11:58:14.967740  812547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 11:58:14.985611  812547 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 11:58:15.014490  812547 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0908 11:58:15.014562  812547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:58:15.028896  812547 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0908 11:58:15.028950  812547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:58:15.043193  812547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:58:15.055922  812547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:58:15.070269  812547 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 11:58:15.084368  812547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:58:15.096945  812547 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:58:15.122776  812547 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:58:15.140358  812547 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 11:58:15.151464  812547 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0908 11:58:15.151540  812547 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0908 11:58:15.175536  812547 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 11:58:15.188179  812547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:58:15.351293  812547 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0908 11:58:15.477833  812547 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0908 11:58:15.477924  812547 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
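	After restarting cri-o, the log waits up to 60s for /var/run/crio/crio.sock to appear before querying crictl. A minimal sketch of that kind of wait loop; the socket path and 60s budget come from the log, while the helper name and 500ms poll interval are assumptions:

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for a filesystem path until it exists or the timeout expires.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is present")
}
```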
	I0908 11:58:15.483478  812547 start.go:563] Will wait 60s for crictl version
	I0908 11:58:15.483545  812547 ssh_runner.go:195] Run: which crictl
	I0908 11:58:15.487576  812547 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 11:58:15.531589  812547 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0908 11:58:15.531724  812547 ssh_runner.go:195] Run: crio --version
	I0908 11:58:15.561931  812547 ssh_runner.go:195] Run: crio --version
	I0908 11:58:15.591994  812547 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0908 11:58:15.593170  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetIP
	I0908 11:58:15.595787  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:15.596129  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f9:54", ip: ""} in network mk-default-k8s-diff-port-149795: {Iface:virbr3 ExpiryTime:2025-09-08 12:58:04 +0000 UTC Type:0 Mac:52:54:00:92:f9:54 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:default-k8s-diff-port-149795 Clientid:01:52:54:00:92:f9:54}
	I0908 11:58:15.596156  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined IP address 192.168.39.109 and MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:15.596409  812547 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0908 11:58:15.601047  812547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
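	The bash one-liner above regenerates /etc/hosts: it filters out any existing host.minikube.internal entry and appends the current gateway mapping. The same pattern as a Go sketch; the 192.168.39.1 address and hostname come from the log, everything else is illustrative, and it writes to a scratch file rather than touching /etc/hosts:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry returns the hosts-file content with any existing line for
// `name` removed and a fresh "<ip>\t<name>" mapping appended, mirroring the
// grep -v / echo pipeline in the log above.
func upsertHostsEntry(content, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(content, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop any stale mapping for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	updated := upsertHostsEntry(string(data), "192.168.39.1", "host.minikube.internal")
	// Write to a scratch file instead of /etc/hosts for this illustration.
	if err := os.WriteFile("/tmp/hosts.updated", []byte(updated), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("wrote /tmp/hosts.updated")
}
```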
	I0908 11:58:15.615387  812547 kubeadm.go:875] updating cluster {Name:default-k8s-diff-port-149795 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.34.0 ClusterName:default-k8s-diff-port-149795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Netwo
rk: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 11:58:15.615514  812547 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 11:58:15.615556  812547 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 11:58:15.654367  812547 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.0". assuming images are not preloaded.
	I0908 11:58:15.654438  812547 ssh_runner.go:195] Run: which lz4
	I0908 11:58:15.658898  812547 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0908 11:58:15.664068  812547 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0908 11:58:15.664125  812547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409455026 bytes)
	I0908 11:58:17.338625  812547 crio.go:462] duration metric: took 1.679773351s to copy over tarball
	I0908 11:58:17.338725  812547 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0908 11:58:18.982976  812547 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.644214726s)
	I0908 11:58:18.983007  812547 crio.go:469] duration metric: took 1.644347643s to extract the tarball
	I0908 11:58:18.983016  812547 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0908 11:58:19.023691  812547 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 11:58:19.076722  812547 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 11:58:19.076766  812547 cache_images.go:85] Images are preloaded, skipping loading
	I0908 11:58:19.076778  812547 kubeadm.go:926] updating node { 192.168.39.109 8444 v1.34.0 crio true true} ...
	I0908 11:58:19.076916  812547 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-149795 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.109
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-149795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0908 11:58:19.077003  812547 ssh_runner.go:195] Run: crio config
	I0908 11:58:19.126661  812547 cni.go:84] Creating CNI manager for ""
	I0908 11:58:19.126689  812547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0908 11:58:19.126704  812547 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 11:58:19.126734  812547 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.109 APIServerPort:8444 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-149795 NodeName:default-k8s-diff-port-149795 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.109"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.109 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 11:58:19.126935  812547 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.109
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-149795"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.109"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.109"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0908 11:58:19.127023  812547 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 11:58:19.139068  812547 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 11:58:19.139136  812547 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 11:58:19.150689  812547 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0908 11:58:19.170879  812547 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 11:58:19.192958  812547 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2231 bytes)
	I0908 11:58:19.217642  812547 ssh_runner.go:195] Run: grep 192.168.39.109	control-plane.minikube.internal$ /etc/hosts
	I0908 11:58:19.221959  812547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.109	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 11:58:19.238620  812547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:58:19.396396  812547 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 11:58:19.439673  812547 certs.go:68] Setting up /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/default-k8s-diff-port-149795 for IP: 192.168.39.109
	I0908 11:58:19.439697  812547 certs.go:194] generating shared ca certs ...
	I0908 11:58:19.439714  812547 certs.go:226] acquiring lock for ca certs: {Name:mkaa8fe7cb1fe9bdb745b85589d42151c557e20e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:58:19.439877  812547 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21503-748170/.minikube/ca.key
	I0908 11:58:19.439927  812547 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21503-748170/.minikube/proxy-client-ca.key
	I0908 11:58:19.439943  812547 certs.go:256] generating profile certs ...
	I0908 11:58:19.440053  812547 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/default-k8s-diff-port-149795/client.key
	I0908 11:58:19.440151  812547 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/default-k8s-diff-port-149795/apiserver.key.0ed28a76
	I0908 11:58:19.440207  812547 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/default-k8s-diff-port-149795/proxy-client.key
	I0908 11:58:19.440370  812547 certs.go:484] found cert: /home/jenkins/minikube-integration/21503-748170/.minikube/certs/752332.pem (1338 bytes)
	W0908 11:58:19.440412  812547 certs.go:480] ignoring /home/jenkins/minikube-integration/21503-748170/.minikube/certs/752332_empty.pem, impossibly tiny 0 bytes
	I0908 11:58:19.440426  812547 certs.go:484] found cert: /home/jenkins/minikube-integration/21503-748170/.minikube/certs/ca-key.pem (1679 bytes)
	I0908 11:58:19.440459  812547 certs.go:484] found cert: /home/jenkins/minikube-integration/21503-748170/.minikube/certs/ca.pem (1078 bytes)
	I0908 11:58:19.440488  812547 certs.go:484] found cert: /home/jenkins/minikube-integration/21503-748170/.minikube/certs/cert.pem (1123 bytes)
	I0908 11:58:19.440525  812547 certs.go:484] found cert: /home/jenkins/minikube-integration/21503-748170/.minikube/certs/key.pem (1675 bytes)
	I0908 11:58:19.440584  812547 certs.go:484] found cert: /home/jenkins/minikube-integration/21503-748170/.minikube/files/etc/ssl/certs/7523322.pem (1708 bytes)
	I0908 11:58:19.441283  812547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 11:58:19.482073  812547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0908 11:58:19.515402  812547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 11:58:19.544994  812547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0908 11:58:19.573132  812547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/default-k8s-diff-port-149795/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0908 11:58:19.601356  812547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/default-k8s-diff-port-149795/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0908 11:58:19.629021  812547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/default-k8s-diff-port-149795/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 11:58:19.656705  812547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/default-k8s-diff-port-149795/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0908 11:58:19.684332  812547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/files/etc/ssl/certs/7523322.pem --> /usr/share/ca-certificates/7523322.pem (1708 bytes)
	I0908 11:58:19.711799  812547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 11:58:19.738871  812547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-748170/.minikube/certs/752332.pem --> /usr/share/ca-certificates/752332.pem (1338 bytes)
	I0908 11:58:19.766478  812547 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 11:58:19.785771  812547 ssh_runner.go:195] Run: openssl version
	I0908 11:58:19.791962  812547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 11:58:19.804523  812547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:58:19.809633  812547 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 10:30 /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:58:19.809703  812547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:58:19.816669  812547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0908 11:58:19.829968  812547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/752332.pem && ln -fs /usr/share/ca-certificates/752332.pem /etc/ssl/certs/752332.pem"
	I0908 11:58:19.842724  812547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/752332.pem
	I0908 11:58:19.847572  812547 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  8 10:41 /usr/share/ca-certificates/752332.pem
	I0908 11:58:19.847629  812547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/752332.pem
	I0908 11:58:19.854389  812547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/752332.pem /etc/ssl/certs/51391683.0"
	I0908 11:58:19.867172  812547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7523322.pem && ln -fs /usr/share/ca-certificates/7523322.pem /etc/ssl/certs/7523322.pem"
	I0908 11:58:19.879993  812547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7523322.pem
	I0908 11:58:19.885001  812547 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  8 10:41 /usr/share/ca-certificates/7523322.pem
	I0908 11:58:19.885048  812547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7523322.pem
	I0908 11:58:19.892243  812547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7523322.pem /etc/ssl/certs/3ec20f2e.0"
	I0908 11:58:19.905223  812547 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 11:58:19.910251  812547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0908 11:58:19.917394  812547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0908 11:58:19.924306  812547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0908 11:58:19.931255  812547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0908 11:58:19.938133  812547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0908 11:58:19.945452  812547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
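	The `openssl x509 -noout -checkend 86400` runs above verify that each control-plane certificate remains valid for at least another day. A minimal Go equivalent using crypto/x509; the certificate path and 24h window mirror the log, while the helper name is illustrative:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within the given duration (the Go analogue of
// `openssl x509 -noout -in <path> -checkend <seconds>`).
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("expires within 24h: %v\n", expiring)
}
```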
	I0908 11:58:19.952120  812547 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-149795 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.34.0 ClusterName:default-k8s-diff-port-149795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:
Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 11:58:19.952206  812547 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0908 11:58:19.952253  812547 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 11:58:19.992645  812547 cri.go:89] found id: ""
	I0908 11:58:19.992735  812547 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0908 11:58:20.004767  812547 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0908 11:58:20.004788  812547 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0908 11:58:20.004835  812547 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0908 11:58:20.016636  812547 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0908 11:58:20.017104  812547 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-149795" does not appear in /home/jenkins/minikube-integration/21503-748170/kubeconfig
	I0908 11:58:20.017241  812547 kubeconfig.go:62] /home/jenkins/minikube-integration/21503-748170/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-149795" cluster setting kubeconfig missing "default-k8s-diff-port-149795" context setting]
	I0908 11:58:20.019484  812547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21503-748170/kubeconfig: {Name:mk78ced2572c8fbe21fb139deb9ae019703be092 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:58:20.020729  812547 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0908 11:58:20.032051  812547 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.39.109
	I0908 11:58:20.032090  812547 kubeadm.go:1152] stopping kube-system containers ...
	I0908 11:58:20.032104  812547 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0908 11:58:20.032159  812547 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 11:58:20.071745  812547 cri.go:89] found id: ""
	I0908 11:58:20.071812  812547 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0908 11:58:20.090648  812547 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0908 11:58:20.102580  812547 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0908 11:58:20.102609  812547 kubeadm.go:157] found existing configuration files:
	
	I0908 11:58:20.102677  812547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0908 11:58:20.113717  812547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0908 11:58:20.113780  812547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0908 11:58:20.125456  812547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0908 11:58:20.135984  812547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0908 11:58:20.136051  812547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0908 11:58:20.147670  812547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0908 11:58:20.158731  812547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0908 11:58:20.158799  812547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0908 11:58:20.169704  812547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0908 11:58:20.180220  812547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0908 11:58:20.180281  812547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0908 11:58:20.192722  812547 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0908 11:58:20.204082  812547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 11:58:20.259335  812547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 11:58:21.700749  812547 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.441363109s)
	I0908 11:58:21.700803  812547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0908 11:58:21.935881  812547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 11:58:22.004530  812547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0908 11:58:22.080673  812547 api_server.go:52] waiting for apiserver process to appear ...
	I0908 11:58:22.080790  812547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 11:58:22.581458  812547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 11:58:23.081324  812547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 11:58:23.581927  812547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 11:58:23.617828  812547 api_server.go:72] duration metric: took 1.537159124s to wait for apiserver process to appear ...
	I0908 11:58:23.617858  812547 api_server.go:88] waiting for apiserver healthz status ...
	I0908 11:58:23.617884  812547 api_server.go:253] Checking apiserver healthz at https://192.168.39.109:8444/healthz ...
	I0908 11:58:23.618456  812547 api_server.go:269] stopped: https://192.168.39.109:8444/healthz: Get "https://192.168.39.109:8444/healthz": dial tcp 192.168.39.109:8444: connect: connection refused
	I0908 11:58:24.118130  812547 api_server.go:253] Checking apiserver healthz at https://192.168.39.109:8444/healthz ...
	I0908 11:58:26.273974  812547 api_server.go:279] https://192.168.39.109:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0908 11:58:26.274003  812547 api_server.go:103] status: https://192.168.39.109:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0908 11:58:26.274018  812547 api_server.go:253] Checking apiserver healthz at https://192.168.39.109:8444/healthz ...
	I0908 11:58:26.300983  812547 api_server.go:279] https://192.168.39.109:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0908 11:58:26.301008  812547 api_server.go:103] status: https://192.168.39.109:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0908 11:58:26.618533  812547 api_server.go:253] Checking apiserver healthz at https://192.168.39.109:8444/healthz ...
	I0908 11:58:26.623470  812547 api_server.go:279] https://192.168.39.109:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 11:58:26.623497  812547 api_server.go:103] status: https://192.168.39.109:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0908 11:58:27.118139  812547 api_server.go:253] Checking apiserver healthz at https://192.168.39.109:8444/healthz ...
	I0908 11:58:27.126489  812547 api_server.go:279] https://192.168.39.109:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 11:58:27.126527  812547 api_server.go:103] status: https://192.168.39.109:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0908 11:58:27.618153  812547 api_server.go:253] Checking apiserver healthz at https://192.168.39.109:8444/healthz ...
	I0908 11:58:27.625893  812547 api_server.go:279] https://192.168.39.109:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 11:58:27.625929  812547 api_server.go:103] status: https://192.168.39.109:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0908 11:58:28.118785  812547 api_server.go:253] Checking apiserver healthz at https://192.168.39.109:8444/healthz ...
	I0908 11:58:28.127432  812547 api_server.go:279] https://192.168.39.109:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 11:58:28.127481  812547 api_server.go:103] status: https://192.168.39.109:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0908 11:58:28.618109  812547 api_server.go:253] Checking apiserver healthz at https://192.168.39.109:8444/healthz ...
	I0908 11:58:28.622835  812547 api_server.go:279] https://192.168.39.109:8444/healthz returned 200:
	ok
	I0908 11:58:28.629247  812547 api_server.go:141] control plane version: v1.34.0
	I0908 11:58:28.629275  812547 api_server.go:131] duration metric: took 5.0114057s to wait for apiserver health ...
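	The sequence above polls https://192.168.39.109:8444/healthz roughly every half second, treating the early 403 (anonymous access while RBAC bootstraps) and 500 (post-start hooks still failing) responses as "not ready yet" until a plain 200/ok arrives about five seconds in. A minimal sketch of such a poll loop; the URL comes from the log, and the InsecureSkipVerify is purely an illustration shortcut since the real client trusts the cluster CA:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the timeout expires. Non-200 responses are retried, matching the
// 403 -> 500 -> 200 progression in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustration only: the real client verifies the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.39.109:8444/healthz", time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```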
	I0908 11:58:28.629288  812547 cni.go:84] Creating CNI manager for ""
	I0908 11:58:28.629298  812547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0908 11:58:28.630982  812547 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0908 11:58:28.632061  812547 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0908 11:58:28.644944  812547 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0908 11:58:28.665882  812547 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 11:58:28.671195  812547 system_pods.go:59] 8 kube-system pods found
	I0908 11:58:28.671246  812547 system_pods.go:61] "coredns-66bc5c9577-8bmsd" [31101ce9-d6dc-4f5b-ad19-555dc9e29a68] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 11:58:28.671257  812547 system_pods.go:61] "etcd-default-k8s-diff-port-149795" [dfeca0dc-2ca7-4732-856f-426cbd0d7f0d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 11:58:28.671265  812547 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-149795" [b8bade15-4ae8-461f-af77-cd65e48e34c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 11:58:28.671277  812547 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-149795" [2c6f4438-958a-4549-8c1d-98ac9429cf5b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 11:58:28.671286  812547 system_pods.go:61] "kube-proxy-vmsg4" [91462068-fe67-4ff4-b9db-f7016960ab40] Running
	I0908 11:58:28.671299  812547 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-149795" [60f180e7-5cf2-487b-b6c8-fe985b5832a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 11:58:28.671307  812547 system_pods.go:61] "metrics-server-746fcd58dc-6hdsd" [c9e0e26f-f05a-4d6d-979b-711c4381d179] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 11:58:28.671317  812547 system_pods.go:61] "storage-provisioner" [0cb21d0b-e87b-4223-ab66-fb22e49c358a] Running
	I0908 11:58:28.671325  812547 system_pods.go:74] duration metric: took 5.418412ms to wait for pod list to return data ...
	I0908 11:58:28.671335  812547 node_conditions.go:102] verifying NodePressure condition ...
	I0908 11:58:28.677939  812547 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0908 11:58:28.677969  812547 node_conditions.go:123] node cpu capacity is 2
	I0908 11:58:28.677984  812547 node_conditions.go:105] duration metric: took 6.639345ms to run NodePressure ...
	I0908 11:58:28.678005  812547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 11:58:28.934748  812547 kubeadm.go:720] waiting for restarted kubelet to initialise ...
	I0908 11:58:28.938318  812547 kubeadm.go:735] kubelet initialised
	I0908 11:58:28.938338  812547 kubeadm.go:736] duration metric: took 3.563579ms waiting for restarted kubelet to initialise ...
	I0908 11:58:28.938355  812547 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0908 11:58:28.954028  812547 ops.go:34] apiserver oom_adj: -16
	I0908 11:58:28.954068  812547 kubeadm.go:593] duration metric: took 8.949272474s to restartPrimaryControlPlane
	I0908 11:58:28.954080  812547 kubeadm.go:394] duration metric: took 9.001966386s to StartCluster
	I0908 11:58:28.954118  812547 settings.go:142] acquiring lock: {Name:mk18c67e9470bbfdfeaf7a5d3ce5d7a1813bc966 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:58:28.954212  812547 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21503-748170/kubeconfig
	I0908 11:58:28.954815  812547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21503-748170/kubeconfig: {Name:mk78ced2572c8fbe21fb139deb9ae019703be092 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:58:28.955039  812547 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.109 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0908 11:58:28.955128  812547 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0908 11:58:28.955248  812547 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-149795"
	I0908 11:58:28.955269  812547 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-149795"
	W0908 11:58:28.955282  812547 addons.go:247] addon storage-provisioner should already be in state true
	I0908 11:58:28.955290  812547 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-149795"
	I0908 11:58:28.955301  812547 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-149795"
	I0908 11:58:28.955319  812547 host.go:66] Checking if "default-k8s-diff-port-149795" exists ...
	I0908 11:58:28.955310  812547 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-149795"
	I0908 11:58:28.955344  812547 config.go:182] Loaded profile config "default-k8s-diff-port-149795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 11:58:28.955362  812547 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-149795"
	W0908 11:58:28.955374  812547 addons.go:247] addon dashboard should already be in state true
	I0908 11:58:28.955411  812547 host.go:66] Checking if "default-k8s-diff-port-149795" exists ...
	I0908 11:58:28.955318  812547 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-149795"
	I0908 11:58:28.955340  812547 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-149795"
	W0908 11:58:28.955584  812547 addons.go:247] addon metrics-server should already be in state true
	I0908 11:58:28.955609  812547 host.go:66] Checking if "default-k8s-diff-port-149795" exists ...
	I0908 11:58:28.955734  812547 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:58:28.955764  812547 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:58:28.955786  812547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:58:28.955837  812547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:58:28.955852  812547 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:58:28.955885  812547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:58:28.956004  812547 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:58:28.956049  812547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:58:28.957327  812547 out.go:179] * Verifying Kubernetes components...
	I0908 11:58:28.958547  812547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:58:28.971865  812547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42757
	I0908 11:58:28.971874  812547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42297
	I0908 11:58:28.972181  812547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38493
	I0908 11:58:28.972350  812547 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:58:28.972368  812547 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:58:28.972565  812547 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:58:28.972809  812547 main.go:141] libmachine: Using API Version  1
	I0908 11:58:28.972835  812547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:58:28.972984  812547 main.go:141] libmachine: Using API Version  1
	I0908 11:58:28.973004  812547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:58:28.972990  812547 main.go:141] libmachine: Using API Version  1
	I0908 11:58:28.973034  812547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:58:28.973240  812547 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:58:28.973466  812547 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:58:28.973489  812547 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:58:28.973653  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetState
	I0908 11:58:28.973853  812547 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:58:28.973905  812547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:58:28.974062  812547 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:58:28.974100  812547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:58:28.974916  812547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35481
	I0908 11:58:28.975397  812547 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:58:28.975910  812547 main.go:141] libmachine: Using API Version  1
	I0908 11:58:28.975926  812547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:58:28.976301  812547 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:58:28.976714  812547 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:58:28.976744  812547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:58:28.976717  812547 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-149795"
	W0908 11:58:28.976820  812547 addons.go:247] addon default-storageclass should already be in state true
	I0908 11:58:28.976854  812547 host.go:66] Checking if "default-k8s-diff-port-149795" exists ...
	I0908 11:58:28.988936  812547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37717
	I0908 11:58:28.989336  812547 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:58:28.989713  812547 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:58:28.989765  812547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:58:28.989837  812547 main.go:141] libmachine: Using API Version  1
	I0908 11:58:28.989859  812547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:58:28.990234  812547 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:58:28.990470  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetState
	I0908 11:58:28.992342  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .DriverName
	I0908 11:58:28.992779  812547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36937
	I0908 11:58:28.993333  812547 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:58:28.993844  812547 main.go:141] libmachine: Using API Version  1
	I0908 11:58:28.993873  812547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:58:28.994135  812547 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0908 11:58:28.994237  812547 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:58:28.994443  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetState
	I0908 11:58:28.995212  812547 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0908 11:58:28.995234  812547 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0908 11:58:28.995254  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHHostname
	I0908 11:58:28.996295  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .DriverName
	I0908 11:58:28.997602  812547 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 11:58:28.998603  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:28.998772  812547 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 11:58:28.998788  812547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0908 11:58:28.998807  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHHostname
	I0908 11:58:28.999096  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f9:54", ip: ""} in network mk-default-k8s-diff-port-149795: {Iface:virbr3 ExpiryTime:2025-09-08 12:58:04 +0000 UTC Type:0 Mac:52:54:00:92:f9:54 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:default-k8s-diff-port-149795 Clientid:01:52:54:00:92:f9:54}
	I0908 11:58:28.999118  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined IP address 192.168.39.109 and MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:28.999285  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHPort
	I0908 11:58:28.999462  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHKeyPath
	I0908 11:58:28.999598  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHUsername
	I0908 11:58:28.999765  812547 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/default-k8s-diff-port-149795/id_rsa Username:docker}
	I0908 11:58:29.002370  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:29.002836  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f9:54", ip: ""} in network mk-default-k8s-diff-port-149795: {Iface:virbr3 ExpiryTime:2025-09-08 12:58:04 +0000 UTC Type:0 Mac:52:54:00:92:f9:54 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:default-k8s-diff-port-149795 Clientid:01:52:54:00:92:f9:54}
	I0908 11:58:29.002866  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined IP address 192.168.39.109 and MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:29.003029  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHPort
	I0908 11:58:29.003208  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHKeyPath
	I0908 11:58:29.003387  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHUsername
	I0908 11:58:29.003526  812547 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/default-k8s-diff-port-149795/id_rsa Username:docker}
	I0908 11:58:29.008162  812547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40067
	I0908 11:58:29.008693  812547 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:58:29.009217  812547 main.go:141] libmachine: Using API Version  1
	I0908 11:58:29.009244  812547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:58:29.009599  812547 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:58:29.010166  812547 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:58:29.010208  812547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:58:29.010651  812547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35431
	I0908 11:58:29.011117  812547 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:58:29.011609  812547 main.go:141] libmachine: Using API Version  1
	I0908 11:58:29.011629  812547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:58:29.011905  812547 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:58:29.012079  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetState
	I0908 11:58:29.013931  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .DriverName
	I0908 11:58:29.015810  812547 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0908 11:58:29.016977  812547 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0908 11:58:29.017893  812547 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0908 11:58:29.017915  812547 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0908 11:58:29.017938  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHHostname
	I0908 11:58:29.020860  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:29.021318  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f9:54", ip: ""} in network mk-default-k8s-diff-port-149795: {Iface:virbr3 ExpiryTime:2025-09-08 12:58:04 +0000 UTC Type:0 Mac:52:54:00:92:f9:54 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:default-k8s-diff-port-149795 Clientid:01:52:54:00:92:f9:54}
	I0908 11:58:29.021351  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined IP address 192.168.39.109 and MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:29.021621  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHPort
	I0908 11:58:29.021805  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHKeyPath
	I0908 11:58:29.021972  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHUsername
	I0908 11:58:29.022245  812547 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/default-k8s-diff-port-149795/id_rsa Username:docker}
	I0908 11:58:29.027796  812547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38233
	I0908 11:58:29.028199  812547 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:58:29.028627  812547 main.go:141] libmachine: Using API Version  1
	I0908 11:58:29.028648  812547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:58:29.029265  812547 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:58:29.029462  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetState
	I0908 11:58:29.030872  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .DriverName
	I0908 11:58:29.031091  812547 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0908 11:58:29.031107  812547 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0908 11:58:29.031124  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHHostname
	I0908 11:58:29.034015  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:29.034450  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f9:54", ip: ""} in network mk-default-k8s-diff-port-149795: {Iface:virbr3 ExpiryTime:2025-09-08 12:58:04 +0000 UTC Type:0 Mac:52:54:00:92:f9:54 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:default-k8s-diff-port-149795 Clientid:01:52:54:00:92:f9:54}
	I0908 11:58:29.034479  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | domain default-k8s-diff-port-149795 has defined IP address 192.168.39.109 and MAC address 52:54:00:92:f9:54 in network mk-default-k8s-diff-port-149795
	I0908 11:58:29.034660  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHPort
	I0908 11:58:29.034817  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHKeyPath
	I0908 11:58:29.035001  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .GetSSHUsername
	I0908 11:58:29.035143  812547 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/default-k8s-diff-port-149795/id_rsa Username:docker}
	I0908 11:58:29.229146  812547 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 11:58:29.266019  812547 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-149795" to be "Ready" ...
	I0908 11:58:29.270153  812547 node_ready.go:49] node "default-k8s-diff-port-149795" is "Ready"
	I0908 11:58:29.270177  812547 node_ready.go:38] duration metric: took 4.120803ms for node "default-k8s-diff-port-149795" to be "Ready" ...
	I0908 11:58:29.270191  812547 api_server.go:52] waiting for apiserver process to appear ...
	I0908 11:58:29.270237  812547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 11:58:29.338415  812547 api_server.go:72] duration metric: took 383.332533ms to wait for apiserver process to appear ...
	I0908 11:58:29.338456  812547 api_server.go:88] waiting for apiserver healthz status ...
	I0908 11:58:29.338482  812547 api_server.go:253] Checking apiserver healthz at https://192.168.39.109:8444/healthz ...
	I0908 11:58:29.348820  812547 api_server.go:279] https://192.168.39.109:8444/healthz returned 200:
	ok
	I0908 11:58:29.351010  812547 api_server.go:141] control plane version: v1.34.0
	I0908 11:58:29.351041  812547 api_server.go:131] duration metric: took 12.575791ms to wait for apiserver health ...
	I0908 11:58:29.351053  812547 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 11:58:29.374273  812547 system_pods.go:59] 8 kube-system pods found
	I0908 11:58:29.374328  812547 system_pods.go:61] "coredns-66bc5c9577-8bmsd" [31101ce9-d6dc-4f5b-ad19-555dc9e29a68] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 11:58:29.374344  812547 system_pods.go:61] "etcd-default-k8s-diff-port-149795" [dfeca0dc-2ca7-4732-856f-426cbd0d7f0d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 11:58:29.374357  812547 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-149795" [b8bade15-4ae8-461f-af77-cd65e48e34c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 11:58:29.374369  812547 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-149795" [2c6f4438-958a-4549-8c1d-98ac9429cf5b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 11:58:29.374376  812547 system_pods.go:61] "kube-proxy-vmsg4" [91462068-fe67-4ff4-b9db-f7016960ab40] Running
	I0908 11:58:29.374388  812547 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-149795" [60f180e7-5cf2-487b-b6c8-fe985b5832a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 11:58:29.374396  812547 system_pods.go:61] "metrics-server-746fcd58dc-6hdsd" [c9e0e26f-f05a-4d6d-979b-711c4381d179] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 11:58:29.374400  812547 system_pods.go:61] "storage-provisioner" [0cb21d0b-e87b-4223-ab66-fb22e49c358a] Running
	I0908 11:58:29.374409  812547 system_pods.go:74] duration metric: took 23.347252ms to wait for pod list to return data ...
	I0908 11:58:29.374419  812547 default_sa.go:34] waiting for default service account to be created ...
	I0908 11:58:29.384255  812547 default_sa.go:45] found service account: "default"
	I0908 11:58:29.384285  812547 default_sa.go:55] duration metric: took 9.859516ms for default service account to be created ...
	I0908 11:58:29.384294  812547 system_pods.go:116] waiting for k8s-apps to be running ...
	I0908 11:58:29.404022  812547 system_pods.go:86] 8 kube-system pods found
	I0908 11:58:29.404098  812547 system_pods.go:89] "coredns-66bc5c9577-8bmsd" [31101ce9-d6dc-4f5b-ad19-555dc9e29a68] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 11:58:29.404114  812547 system_pods.go:89] "etcd-default-k8s-diff-port-149795" [dfeca0dc-2ca7-4732-856f-426cbd0d7f0d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 11:58:29.404130  812547 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149795" [b8bade15-4ae8-461f-af77-cd65e48e34c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 11:58:29.404143  812547 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149795" [2c6f4438-958a-4549-8c1d-98ac9429cf5b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 11:58:29.404150  812547 system_pods.go:89] "kube-proxy-vmsg4" [91462068-fe67-4ff4-b9db-f7016960ab40] Running
	I0908 11:58:29.404160  812547 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149795" [60f180e7-5cf2-487b-b6c8-fe985b5832a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 11:58:29.404175  812547 system_pods.go:89] "metrics-server-746fcd58dc-6hdsd" [c9e0e26f-f05a-4d6d-979b-711c4381d179] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 11:58:29.404182  812547 system_pods.go:89] "storage-provisioner" [0cb21d0b-e87b-4223-ab66-fb22e49c358a] Running
	I0908 11:58:29.404194  812547 system_pods.go:126] duration metric: took 19.89185ms to wait for k8s-apps to be running ...
	I0908 11:58:29.404208  812547 system_svc.go:44] waiting for kubelet service to be running ....
	I0908 11:58:29.404264  812547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 11:58:29.406926  812547 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0908 11:58:29.406952  812547 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0908 11:58:29.417366  812547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 11:58:29.428033  812547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0908 11:58:29.475974  812547 system_svc.go:56] duration metric: took 71.758039ms WaitForService to wait for kubelet
	I0908 11:58:29.476005  812547 kubeadm.go:578] duration metric: took 520.932705ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 11:58:29.476023  812547 node_conditions.go:102] verifying NodePressure condition ...
	I0908 11:58:29.487222  812547 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0908 11:58:29.487250  812547 node_conditions.go:123] node cpu capacity is 2
	I0908 11:58:29.487260  812547 node_conditions.go:105] duration metric: took 11.232529ms to run NodePressure ...
	I0908 11:58:29.487272  812547 start.go:241] waiting for startup goroutines ...
	I0908 11:58:29.498094  812547 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0908 11:58:29.498126  812547 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0908 11:58:29.574478  812547 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0908 11:58:29.574506  812547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0908 11:58:29.629606  812547 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0908 11:58:29.629644  812547 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0908 11:58:29.662865  812547 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0908 11:58:29.662906  812547 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0908 11:58:29.720290  812547 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0908 11:58:29.720319  812547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0908 11:58:29.733183  812547 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 11:58:29.733214  812547 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0908 11:58:29.781759  812547 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0908 11:58:29.781806  812547 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0908 11:58:29.806631  812547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 11:58:29.850357  812547 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0908 11:58:29.850399  812547 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0908 11:58:29.922320  812547 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0908 11:58:29.922357  812547 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0908 11:58:29.980722  812547 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0908 11:58:29.980835  812547 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0908 11:58:30.031626  812547 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0908 11:58:30.031662  812547 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0908 11:58:30.070327  812547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0908 11:58:31.096390  812547 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.668318659s)
	I0908 11:58:31.096454  812547 main.go:141] libmachine: Making call to close driver server
	I0908 11:58:31.096470  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .Close
	I0908 11:58:31.096824  812547 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:58:31.096843  812547 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:58:31.096855  812547 main.go:141] libmachine: Making call to close driver server
	I0908 11:58:31.096823  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | Closing plugin on server side
	I0908 11:58:31.096861  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .Close
	I0908 11:58:31.097169  812547 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:58:31.097191  812547 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:58:31.097190  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | Closing plugin on server side
	I0908 11:58:31.098919  812547 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.681522913s)
	I0908 11:58:31.098952  812547 main.go:141] libmachine: Making call to close driver server
	I0908 11:58:31.098964  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .Close
	I0908 11:58:31.099250  812547 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:58:31.099270  812547 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:58:31.099282  812547 main.go:141] libmachine: Making call to close driver server
	I0908 11:58:31.099293  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .Close
	I0908 11:58:31.099303  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | Closing plugin on server side
	I0908 11:58:31.099539  812547 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:58:31.099559  812547 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:58:31.099581  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | Closing plugin on server side
	I0908 11:58:31.135776  812547 main.go:141] libmachine: Making call to close driver server
	I0908 11:58:31.135799  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .Close
	I0908 11:58:31.136173  812547 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:58:31.136198  812547 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:58:31.295619  812547 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.488930548s)
	I0908 11:58:31.295702  812547 main.go:141] libmachine: Making call to close driver server
	I0908 11:58:31.295724  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .Close
	I0908 11:58:31.296071  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | Closing plugin on server side
	I0908 11:58:31.296139  812547 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:58:31.296148  812547 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:58:31.296161  812547 main.go:141] libmachine: Making call to close driver server
	I0908 11:58:31.296169  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .Close
	I0908 11:58:31.296434  812547 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:58:31.296452  812547 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:58:31.296464  812547 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-149795"
	I0908 11:58:31.296487  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | Closing plugin on server side
	I0908 11:58:31.732140  812547 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.661704525s)
	I0908 11:58:31.732218  812547 main.go:141] libmachine: Making call to close driver server
	I0908 11:58:31.732238  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .Close
	I0908 11:58:31.732701  812547 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:58:31.732720  812547 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:58:31.732743  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) DBG | Closing plugin on server side
	I0908 11:58:31.732785  812547 main.go:141] libmachine: Making call to close driver server
	I0908 11:58:31.732846  812547 main.go:141] libmachine: (default-k8s-diff-port-149795) Calling .Close
	I0908 11:58:31.733100  812547 main.go:141] libmachine: Successfully made call to close driver server
	I0908 11:58:31.733118  812547 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 11:58:31.734877  812547 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-149795 addons enable metrics-server
	
	I0908 11:58:31.736134  812547 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0908 11:58:31.737368  812547 addons.go:514] duration metric: took 2.782255255s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0908 11:58:31.737411  812547 start.go:246] waiting for cluster config update ...
	I0908 11:58:31.737423  812547 start.go:255] writing updated cluster config ...
	I0908 11:58:31.737650  812547 ssh_runner.go:195] Run: rm -f paused
	I0908 11:58:31.743845  812547 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 11:58:31.750592  812547 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8bmsd" in "kube-system" namespace to be "Ready" or be gone ...
	W0908 11:58:33.756566  812547 pod_ready.go:104] pod "coredns-66bc5c9577-8bmsd" is not "Ready", error: <nil>
	W0908 11:58:35.757629  812547 pod_ready.go:104] pod "coredns-66bc5c9577-8bmsd" is not "Ready", error: <nil>
	W0908 11:58:38.262814  812547 pod_ready.go:104] pod "coredns-66bc5c9577-8bmsd" is not "Ready", error: <nil>
	I0908 11:58:40.757349  812547 pod_ready.go:94] pod "coredns-66bc5c9577-8bmsd" is "Ready"
	I0908 11:58:40.757390  812547 pod_ready.go:86] duration metric: took 9.006768043s for pod "coredns-66bc5c9577-8bmsd" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:58:40.760045  812547 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-149795" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:58:40.764175  812547 pod_ready.go:94] pod "etcd-default-k8s-diff-port-149795" is "Ready"
	I0908 11:58:40.764200  812547 pod_ready.go:86] duration metric: took 4.124516ms for pod "etcd-default-k8s-diff-port-149795" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:58:40.767140  812547 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-149795" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:58:41.773282  812547 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-149795" is "Ready"
	I0908 11:58:41.773309  812547 pod_ready.go:86] duration metric: took 1.006147457s for pod "kube-apiserver-default-k8s-diff-port-149795" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:58:41.776497  812547 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-149795" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:58:41.781897  812547 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-149795" is "Ready"
	I0908 11:58:41.781921  812547 pod_ready.go:86] duration metric: took 5.395768ms for pod "kube-controller-manager-default-k8s-diff-port-149795" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:58:41.956083  812547 pod_ready.go:83] waiting for pod "kube-proxy-vmsg4" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:58:42.355763  812547 pod_ready.go:94] pod "kube-proxy-vmsg4" is "Ready"
	I0908 11:58:42.355797  812547 pod_ready.go:86] duration metric: took 399.683912ms for pod "kube-proxy-vmsg4" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:58:42.555394  812547 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-149795" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:58:42.955123  812547 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-149795" is "Ready"
	I0908 11:58:42.955153  812547 pod_ready.go:86] duration metric: took 399.731995ms for pod "kube-scheduler-default-k8s-diff-port-149795" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:58:42.955166  812547 pod_ready.go:40] duration metric: took 11.211288623s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 11:58:42.998388  812547 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0908 11:58:43.000070  812547 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-149795" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 08 12:16:47 default-k8s-diff-port-149795 crio[884]: time="2025-09-08 12:16:47.097060322Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757333807097040305,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176956,},InodesUsed:&UInt64Value{Value:65,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=80d40268-a6e7-4ba8-abdc-4c13c3160851 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 12:16:47 default-k8s-diff-port-149795 crio[884]: time="2025-09-08 12:16:47.097605148Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5331a5b1-6b15-4775-b66d-fffaa4750efd name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 12:16:47 default-k8s-diff-port-149795 crio[884]: time="2025-09-08 12:16:47.097691539Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5331a5b1-6b15-4775-b66d-fffaa4750efd name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 12:16:47 default-k8s-diff-port-149795 crio[884]: time="2025-09-08 12:16:47.097944313Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:831b8e1914d6dfd31df3a5b00805f5c589419c96fa409985f0bd9b6ba3d8f18e,PodSandboxId:d9971d1b61c2b240442e860d28c75ed1876d6b74546e9ae4d1caca122e147b43,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1757333711119554611,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-r9vzn,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: f6400ab9-1f7a-4025-bae5-eb4d4dc9dae7,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2295a57e5d0f147bbdd47cb07012fadbe3fa31f4466b20fc874a981f413654bd,PodSandboxId:e6e807891e561f2eaf19a281fc6b7d6c738e6fe93c47ab59c05c6740ba67abd6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1757332738346959970,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cb21d0b-e87b-4223-ab66-fb22e49c358a,},Annota
tions:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c944c5685dcbe3453a0762636a7e0bf9fb8fd84df73ff41e3f5354998844c36d,PodSandboxId:5d8bf6751a128d66acf89bc3aa31bac502c9ee3f9d5a79899995d52697862f0a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1757332717948289872,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f7309204-a2be-4cc0-a01b-de13b6afd01e,},Annotations:map[string]
string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6cc782e0ec2248d9b723af4f7a4aa589befe26da4d9ba49c275cdca6f74dec7,PodSandboxId:564fc335152637623fee614ca3c64e414252c6befca259213154629956993fd0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1757332711698979676,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-8bmsd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31101ce9-d6dc-4f5b-ad19-555dc9e29a68,},Annotations:map[string]string{io.kubernetes.container.
hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:049e2bd82da59a081bdd6cc45be2ff080f311ffc832f781396eb9328ed93c742,PodSandboxId:5e9bb07b59271317bdf542b1520014ed4419ff83229a0b31f45558efa466ad57,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b
97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1757332707597457007,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vmsg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91462068-fe67-4ff4-b9db-f7016960ab40,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1d8c38b6064ace141c9fc470297bdad1b46cbfec17b7ed88917f4ed73e3f238,PodSandboxId:e6e807891e561f2eaf19a281fc6b7d6c738e6fe93c47ab59c05c6740ba67abd6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_EXITED,CreatedAt:1757332707569581970,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cb21d0b-e87b-4223-ab66-fb22e49c358a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:132e0611e671809fe2004db5b204ebb98d88547afad9f17936178a3a61691d1e,PodSandboxId:005462b99d1e169d956b9dadfadd9eb59f72c050155b197c0cc1128de57e543c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTA
INER_RUNNING,CreatedAt:1757332703435396044,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-149795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f109fa0cc69fc770844283f79b5fed2c,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e77a34bd3a0a4783768c9af6e275ac7573ac7ddf7e9ee8566e830d8fd7e512f,PodSandboxId:165028a9051cd2b786719b418a4b005bbdd2e13a735c5e98b3072dc90b72ff57,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1757332703392199783,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-149795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a999c546c3cf243b5bc764b1c7bcc19d,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8444,\"containerPort\":8444,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c01a55b26f98d659cb84f0e01d507b4bbbb7a4657effe5cfee821bff3e8fca7,PodSandboxId:3cf30400a1f0bfe236266236c4096dad440b5bddd406c6efaa1ecc781decf975,Metadata:&ContainerMetadata{Name:etcd,Attempt
:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1757332703383061482,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-149795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d198d64ccda796e844cc7692cb87e41,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34c17ee824d7f491d7c07c374ca2205434be4cf56242b857e2ad06e9f30a03ab,PodSandboxId:a243efcf1b53413fb9c3d
cce13b873c7ad6de31fa9ab9524f541e34a44d2f3ff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1757332703369716881,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-149795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 512eeffaafa40f337891a4fc086eef59,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},},}" file="otel-collector/interceptors.go:74" id=5331a5b1-6b15-4775-b66d-fffaa4750efd name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 12:16:47 default-k8s-diff-port-149795 crio[884]: time="2025-09-08 12:16:47.107811443Z" level=debug msg="Request: &ImageStatusRequest{Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-08T11:58:31.502006160Z,kubernetes.io/config.source: api,},UserSpecifiedImage:,RuntimeHandler:,},Verbose:false,}" file="otel-collector/interceptors.go:62" id=cf845eb0-f015-4627-bf01-7745563f2f4f name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:16:47 default-k8s-diff-port-149795 crio[884]: time="2025-09-08 12:16:47.107933434Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" file="server/image_status.go:27" id=cf845eb0-f015-4627-bf01-7745563f2f4f name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:16:47 default-k8s-diff-port-149795 crio[884]: time="2025-09-08 12:16:47.108060954Z" level=debug msg="reference \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" does not resolve to an image ID" file="storage/storage_reference.go:149"
	Sep 08 12:16:47 default-k8s-diff-port-149795 crio[884]: time="2025-09-08 12:16:47.108118668Z" level=debug msg="Can't find docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" file="server/image_status.go:97" id=cf845eb0-f015-4627-bf01-7745563f2f4f name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:16:47 default-k8s-diff-port-149795 crio[884]: time="2025-09-08 12:16:47.108143499Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" file="server/image_status.go:111" id=cf845eb0-f015-4627-bf01-7745563f2f4f name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:16:47 default-k8s-diff-port-149795 crio[884]: time="2025-09-08 12:16:47.108165143Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" file="server/image_status.go:33" id=cf845eb0-f015-4627-bf01-7745563f2f4f name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:16:47 default-k8s-diff-port-149795 crio[884]: time="2025-09-08 12:16:47.108195141Z" level=debug msg="Response: &ImageStatusResponse{Image:nil,Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=cf845eb0-f015-4627-bf01-7745563f2f4f name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:16:47 default-k8s-diff-port-149795 crio[884]: time="2025-09-08 12:16:47.133974444Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8c270f1f-39c2-470e-8ce0-0212531973da name=/runtime.v1.RuntimeService/Version
	Sep 08 12:16:47 default-k8s-diff-port-149795 crio[884]: time="2025-09-08 12:16:47.134057863Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8c270f1f-39c2-470e-8ce0-0212531973da name=/runtime.v1.RuntimeService/Version
	Sep 08 12:16:47 default-k8s-diff-port-149795 crio[884]: time="2025-09-08 12:16:47.135121740Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7ba5d236-d908-42b4-ade7-a7e3a9320ed5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 12:16:47 default-k8s-diff-port-149795 crio[884]: time="2025-09-08 12:16:47.136339963Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757333807136260242,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176956,},InodesUsed:&UInt64Value{Value:65,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7ba5d236-d908-42b4-ade7-a7e3a9320ed5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 12:16:47 default-k8s-diff-port-149795 crio[884]: time="2025-09-08 12:16:47.137339069Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2420b230-3175-49d5-87b8-e7a5bab3710b name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 12:16:47 default-k8s-diff-port-149795 crio[884]: time="2025-09-08 12:16:47.137566107Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2420b230-3175-49d5-87b8-e7a5bab3710b name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 12:16:47 default-k8s-diff-port-149795 crio[884]: time="2025-09-08 12:16:47.138232944Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:831b8e1914d6dfd31df3a5b00805f5c589419c96fa409985f0bd9b6ba3d8f18e,PodSandboxId:d9971d1b61c2b240442e860d28c75ed1876d6b74546e9ae4d1caca122e147b43,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1757333711119554611,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-r9vzn,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: f6400ab9-1f7a-4025-bae5-eb4d4dc9dae7,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2295a57e5d0f147bbdd47cb07012fadbe3fa31f4466b20fc874a981f413654bd,PodSandboxId:e6e807891e561f2eaf19a281fc6b7d6c738e6fe93c47ab59c05c6740ba67abd6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1757332738346959970,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cb21d0b-e87b-4223-ab66-fb22e49c358a,},Annota
tions:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c944c5685dcbe3453a0762636a7e0bf9fb8fd84df73ff41e3f5354998844c36d,PodSandboxId:5d8bf6751a128d66acf89bc3aa31bac502c9ee3f9d5a79899995d52697862f0a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1757332717948289872,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f7309204-a2be-4cc0-a01b-de13b6afd01e,},Annotations:map[string]
string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6cc782e0ec2248d9b723af4f7a4aa589befe26da4d9ba49c275cdca6f74dec7,PodSandboxId:564fc335152637623fee614ca3c64e414252c6befca259213154629956993fd0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1757332711698979676,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-8bmsd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31101ce9-d6dc-4f5b-ad19-555dc9e29a68,},Annotations:map[string]string{io.kubernetes.container.
hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:049e2bd82da59a081bdd6cc45be2ff080f311ffc832f781396eb9328ed93c742,PodSandboxId:5e9bb07b59271317bdf542b1520014ed4419ff83229a0b31f45558efa466ad57,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b
97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1757332707597457007,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vmsg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91462068-fe67-4ff4-b9db-f7016960ab40,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1d8c38b6064ace141c9fc470297bdad1b46cbfec17b7ed88917f4ed73e3f238,PodSandboxId:e6e807891e561f2eaf19a281fc6b7d6c738e6fe93c47ab59c05c6740ba67abd6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_EXITED,CreatedAt:1757332707569581970,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cb21d0b-e87b-4223-ab66-fb22e49c358a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:132e0611e671809fe2004db5b204ebb98d88547afad9f17936178a3a61691d1e,PodSandboxId:005462b99d1e169d956b9dadfadd9eb59f72c050155b197c0cc1128de57e543c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTA
INER_RUNNING,CreatedAt:1757332703435396044,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-149795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f109fa0cc69fc770844283f79b5fed2c,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e77a34bd3a0a4783768c9af6e275ac7573ac7ddf7e9ee8566e830d8fd7e512f,PodSandboxId:165028a9051cd2b786719b418a4b005bbdd2e13a735c5e98b3072dc90b72ff57,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1757332703392199783,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-149795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a999c546c3cf243b5bc764b1c7bcc19d,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8444,\"containerPort\":8444,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c01a55b26f98d659cb84f0e01d507b4bbbb7a4657effe5cfee821bff3e8fca7,PodSandboxId:3cf30400a1f0bfe236266236c4096dad440b5bddd406c6efaa1ecc781decf975,Metadata:&ContainerMetadata{Name:etcd,Attempt
:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1757332703383061482,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-149795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d198d64ccda796e844cc7692cb87e41,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34c17ee824d7f491d7c07c374ca2205434be4cf56242b857e2ad06e9f30a03ab,PodSandboxId:a243efcf1b53413fb9c3d
cce13b873c7ad6de31fa9ab9524f541e34a44d2f3ff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1757332703369716881,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-149795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 512eeffaafa40f337891a4fc086eef59,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},},}" file="otel-collector/interceptors.go:74" id=2420b230-3175-49d5-87b8-e7a5bab3710b name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 12:16:47 default-k8s-diff-port-149795 crio[884]: time="2025-09-08 12:16:47.174456083Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0bbaa6f9-5874-4ad3-8b8e-bcf7886d0352 name=/runtime.v1.RuntimeService/Version
	Sep 08 12:16:47 default-k8s-diff-port-149795 crio[884]: time="2025-09-08 12:16:47.174522882Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0bbaa6f9-5874-4ad3-8b8e-bcf7886d0352 name=/runtime.v1.RuntimeService/Version
	Sep 08 12:16:47 default-k8s-diff-port-149795 crio[884]: time="2025-09-08 12:16:47.178874696Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=383442b3-6e62-4a98-b5a3-d28a02e8721e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 12:16:47 default-k8s-diff-port-149795 crio[884]: time="2025-09-08 12:16:47.179288851Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757333807179269965,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176956,},InodesUsed:&UInt64Value{Value:65,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=383442b3-6e62-4a98-b5a3-d28a02e8721e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 12:16:47 default-k8s-diff-port-149795 crio[884]: time="2025-09-08 12:16:47.179781727Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ae68342c-918f-4781-86a5-d91956109695 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 12:16:47 default-k8s-diff-port-149795 crio[884]: time="2025-09-08 12:16:47.180210644Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ae68342c-918f-4781-86a5-d91956109695 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 12:16:47 default-k8s-diff-port-149795 crio[884]: time="2025-09-08 12:16:47.180682062Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:831b8e1914d6dfd31df3a5b00805f5c589419c96fa409985f0bd9b6ba3d8f18e,PodSandboxId:d9971d1b61c2b240442e860d28c75ed1876d6b74546e9ae4d1caca122e147b43,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1757333711119554611,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-r9vzn,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: f6400ab9-1f7a-4025-bae5-eb4d4dc9dae7,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2295a57e5d0f147bbdd47cb07012fadbe3fa31f4466b20fc874a981f413654bd,PodSandboxId:e6e807891e561f2eaf19a281fc6b7d6c738e6fe93c47ab59c05c6740ba67abd6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1757332738346959970,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cb21d0b-e87b-4223-ab66-fb22e49c358a,},Annota
tions:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c944c5685dcbe3453a0762636a7e0bf9fb8fd84df73ff41e3f5354998844c36d,PodSandboxId:5d8bf6751a128d66acf89bc3aa31bac502c9ee3f9d5a79899995d52697862f0a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1757332717948289872,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f7309204-a2be-4cc0-a01b-de13b6afd01e,},Annotations:map[string]
string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6cc782e0ec2248d9b723af4f7a4aa589befe26da4d9ba49c275cdca6f74dec7,PodSandboxId:564fc335152637623fee614ca3c64e414252c6befca259213154629956993fd0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1757332711698979676,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-8bmsd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31101ce9-d6dc-4f5b-ad19-555dc9e29a68,},Annotations:map[string]string{io.kubernetes.container.
hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:049e2bd82da59a081bdd6cc45be2ff080f311ffc832f781396eb9328ed93c742,PodSandboxId:5e9bb07b59271317bdf542b1520014ed4419ff83229a0b31f45558efa466ad57,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b
97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1757332707597457007,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vmsg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91462068-fe67-4ff4-b9db-f7016960ab40,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1d8c38b6064ace141c9fc470297bdad1b46cbfec17b7ed88917f4ed73e3f238,PodSandboxId:e6e807891e561f2eaf19a281fc6b7d6c738e6fe93c47ab59c05c6740ba67abd6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_EXITED,CreatedAt:1757332707569581970,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cb21d0b-e87b-4223-ab66-fb22e49c358a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:132e0611e671809fe2004db5b204ebb98d88547afad9f17936178a3a61691d1e,PodSandboxId:005462b99d1e169d956b9dadfadd9eb59f72c050155b197c0cc1128de57e543c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTA
INER_RUNNING,CreatedAt:1757332703435396044,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-149795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f109fa0cc69fc770844283f79b5fed2c,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e77a34bd3a0a4783768c9af6e275ac7573ac7ddf7e9ee8566e830d8fd7e512f,PodSandboxId:165028a9051cd2b786719b418a4b005bbdd2e13a735c5e98b3072dc90b72ff57,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1757332703392199783,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-149795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a999c546c3cf243b5bc764b1c7bcc19d,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8444,\"containerPort\":8444,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c01a55b26f98d659cb84f0e01d507b4bbbb7a4657effe5cfee821bff3e8fca7,PodSandboxId:3cf30400a1f0bfe236266236c4096dad440b5bddd406c6efaa1ecc781decf975,Metadata:&ContainerMetadata{Name:etcd,Attempt
:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1757332703383061482,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-149795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d198d64ccda796e844cc7692cb87e41,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34c17ee824d7f491d7c07c374ca2205434be4cf56242b857e2ad06e9f30a03ab,PodSandboxId:a243efcf1b53413fb9c3d
cce13b873c7ad6de31fa9ab9524f541e34a44d2f3ff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1757332703369716881,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-149795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 512eeffaafa40f337891a4fc086eef59,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},},}" file="otel-collector/interceptors.go:74" id=ae68342c-918f-4781-86a5-d91956109695 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	831b8e1914d6d       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                      About a minute ago   Exited              dashboard-metrics-scraper   8                   d9971d1b61c2b       dashboard-metrics-scraper-6ffb444bf9-r9vzn
	2295a57e5d0f1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      17 minutes ago       Running             storage-provisioner         2                   e6e807891e561       storage-provisioner
	c944c5685dcbe       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   18 minutes ago       Running             busybox                     1                   5d8bf6751a128       busybox
	f6cc782e0ec22       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      18 minutes ago       Running             coredns                     1                   564fc33515263       coredns-66bc5c9577-8bmsd
	049e2bd82da59       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      18 minutes ago       Running             kube-proxy                  1                   5e9bb07b59271       kube-proxy-vmsg4
	c1d8c38b6064a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago       Exited              storage-provisioner         1                   e6e807891e561       storage-provisioner
	132e0611e6718       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      18 minutes ago       Running             kube-controller-manager     1                   005462b99d1e1       kube-controller-manager-default-k8s-diff-port-149795
	5e77a34bd3a0a       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                      18 minutes ago       Running             kube-apiserver              1                   165028a9051cd       kube-apiserver-default-k8s-diff-port-149795
	3c01a55b26f98       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      18 minutes ago       Running             etcd                        1                   3cf30400a1f0b       etcd-default-k8s-diff-port-149795
	34c17ee824d7f       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      18 minutes ago       Running             kube-scheduler              1                   a243efcf1b534       kube-scheduler-default-k8s-diff-port-149795
	
	
	==> coredns [f6cc782e0ec2248d9b723af4f7a4aa589befe26da4d9ba49c275cdca6f74dec7] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57732 - 1757 "HINFO IN 5940651371093740128.8074940744283301137. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012353843s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-149795
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-149795
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9b5c9e357ec605e3f7a3fbfd5f3e59fa37db6ba2
	                    minikube.k8s.io/name=default-k8s-diff-port-149795
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T11_55_16_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 11:55:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-149795
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 12:16:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 12:13:57 +0000   Mon, 08 Sep 2025 11:55:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 12:13:57 +0000   Mon, 08 Sep 2025 11:55:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 12:13:57 +0000   Mon, 08 Sep 2025 11:55:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 12:13:57 +0000   Mon, 08 Sep 2025 11:58:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.109
	  Hostname:    default-k8s-diff-port-149795
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 dbabe12d88764d91a3177cf0fdd6c78d
	  System UUID:                dbabe12d-8876-4d91-a317-7cf0fdd6c78d
	  Boot ID:                    c7544f21-1a6f-4746-bab2-28225f8275e1
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-66bc5c9577-8bmsd                                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     21m
	  kube-system                 etcd-default-k8s-diff-port-149795                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         21m
	  kube-system                 kube-apiserver-default-k8s-diff-port-149795             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-149795    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-vmsg4                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-default-k8s-diff-port-149795             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-746fcd58dc-6hdsd                         100m (5%)     0 (0%)      200Mi (6%)       0 (0%)         20m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-r9vzn              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-h5hcp                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (12%)  170Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 21m                kube-proxy       
	  Normal   Starting                 18m                kube-proxy       
	  Normal   Starting                 21m                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  21m                kubelet          Node default-k8s-diff-port-149795 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    21m                kubelet          Node default-k8s-diff-port-149795 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     21m                kubelet          Node default-k8s-diff-port-149795 status is now: NodeHasSufficientPID
	  Normal   NodeReady                21m                kubelet          Node default-k8s-diff-port-149795 status is now: NodeReady
	  Normal   RegisteredNode           21m                node-controller  Node default-k8s-diff-port-149795 event: Registered Node default-k8s-diff-port-149795 in Controller
	  Normal   Starting                 18m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-149795 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-149795 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node default-k8s-diff-port-149795 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 18m                kubelet          Node default-k8s-diff-port-149795 has been rebooted, boot id: c7544f21-1a6f-4746-bab2-28225f8275e1
	  Normal   RegisteredNode           18m                node-controller  Node default-k8s-diff-port-149795 event: Registered Node default-k8s-diff-port-149795 in Controller
	
	
	==> dmesg <==
	[Sep 8 11:57] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000008] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001847] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Sep 8 11:58] (rpcbind)[121]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.715268] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.085475] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.099527] kauditd_printk_skb: 74 callbacks suppressed
	[  +5.532355] kauditd_printk_skb: 168 callbacks suppressed
	[  +0.731566] kauditd_printk_skb: 335 callbacks suppressed
	[ +20.399004] kauditd_printk_skb: 11 callbacks suppressed
	[Sep 8 11:59] kauditd_printk_skb: 5 callbacks suppressed
	[ +11.063896] kauditd_printk_skb: 55 callbacks suppressed
	[ +20.688334] kauditd_printk_skb: 6 callbacks suppressed
	[Sep 8 12:00] kauditd_printk_skb: 6 callbacks suppressed
	[Sep 8 12:02] kauditd_printk_skb: 6 callbacks suppressed
	[Sep 8 12:04] kauditd_printk_skb: 6 callbacks suppressed
	[Sep 8 12:09] kauditd_printk_skb: 6 callbacks suppressed
	[Sep 8 12:15] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [3c01a55b26f98d659cb84f0e01d507b4bbbb7a4657effe5cfee821bff3e8fca7] <==
	{"level":"warn","ts":"2025-09-08T11:58:25.494997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:58:25.506011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:58:25.521218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:58:25.528564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:58:25.538705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:58:25.554909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:58:25.560677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:58:25.569574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:58:25.589913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:58:25.607163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:58:25.608632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:58:25.624423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:58:25.626865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:58:25.636175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:58:25.649758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:58:25.664133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:58:25.671041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:58:25.723080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60666","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-08T11:59:48.145933Z","caller":"traceutil/trace.go:172","msg":"trace[1094567219] transaction","detail":"{read_only:false; response_revision:749; number_of_response:1; }","duration":"110.759246ms","start":"2025-09-08T11:59:48.035152Z","end":"2025-09-08T11:59:48.145911Z","steps":["trace[1094567219] 'process raft request'  (duration: 110.536973ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T12:08:24.967157Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1016}
	{"level":"info","ts":"2025-09-08T12:08:24.991756Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1016,"took":"23.676513ms","hash":4160824975,"current-db-size-bytes":3284992,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1302528,"current-db-size-in-use":"1.3 MB"}
	{"level":"info","ts":"2025-09-08T12:08:24.991906Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":4160824975,"revision":1016,"compact-revision":-1}
	{"level":"info","ts":"2025-09-08T12:13:24.974134Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1308}
	{"level":"info","ts":"2025-09-08T12:13:24.978071Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1308,"took":"3.630357ms","hash":1801963511,"current-db-size-bytes":3284992,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1880064,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2025-09-08T12:13:24.978115Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1801963511,"revision":1308,"compact-revision":1016}
	
	
	==> kernel <==
	 12:16:47 up 18 min,  0 users,  load average: 0.01, 0.14, 0.17
	Linux default-k8s-diff-port-149795 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep  4 13:14:36 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [5e77a34bd3a0a4783768c9af6e275ac7573ac7ddf7e9ee8566e830d8fd7e512f] <==
	I0908 12:13:27.351438       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0908 12:13:39.818418       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:14:14.621247       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0908 12:14:27.350273       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 12:14:27.350325       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0908 12:14:27.350337       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0908 12:14:27.351951       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 12:14:27.352006       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0908 12:14:27.352015       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0908 12:15:02.130200       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:15:14.983978       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:16:11.293683       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0908 12:16:27.350740       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 12:16:27.350800       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0908 12:16:27.350858       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0908 12:16:27.353094       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 12:16:27.353175       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0908 12:16:27.353186       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0908 12:16:35.241787       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [132e0611e671809fe2004db5b204ebb98d88547afad9f17936178a3a61691d1e] <==
	I0908 12:10:31.128365       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:11:01.007670       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:11:01.136161       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:11:31.012528       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:11:31.143425       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:12:01.017006       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:12:01.150413       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:12:31.021170       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:12:31.158630       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:13:01.026787       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:13:01.167511       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:13:31.031717       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:13:31.175740       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:14:01.037568       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:14:01.183974       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:14:31.042594       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:14:31.191625       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:15:01.047419       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:15:01.199107       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:15:31.052612       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:15:31.206333       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:16:01.057358       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:16:01.215798       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:16:31.062277       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:16:31.224305       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [049e2bd82da59a081bdd6cc45be2ff080f311ffc832f781396eb9328ed93c742] <==
	I0908 11:58:27.777159       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 11:58:27.877734       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 11:58:27.877812       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.109"]
	E0908 11:58:27.877951       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 11:58:27.913409       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0908 11:58:27.913526       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0908 11:58:27.913634       1 server_linux.go:132] "Using iptables Proxier"
	I0908 11:58:27.923051       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 11:58:27.923362       1 server.go:527] "Version info" version="v1.34.0"
	I0908 11:58:27.923405       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 11:58:27.931990       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 11:58:27.932029       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 11:58:27.932132       1 config.go:200] "Starting service config controller"
	I0908 11:58:27.932158       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 11:58:27.932170       1 config.go:106] "Starting endpoint slice config controller"
	I0908 11:58:27.932174       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 11:58:27.933575       1 config.go:309] "Starting node config controller"
	I0908 11:58:27.933903       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 11:58:27.933943       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 11:58:28.032896       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0908 11:58:28.032988       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0908 11:58:28.032999       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [34c17ee824d7f491d7c07c374ca2205434be4cf56242b857e2ad06e9f30a03ab] <==
	I0908 11:58:24.454322       1 serving.go:386] Generated self-signed cert in-memory
	W0908 11:58:26.307209       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0908 11:58:26.307284       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0908 11:58:26.308880       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0908 11:58:26.308933       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0908 11:58:26.376503       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0908 11:58:26.376573       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 11:58:26.382583       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 11:58:26.382692       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 11:58:26.384522       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0908 11:58:26.384600       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0908 11:58:26.482989       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 08 12:16:05 default-k8s-diff-port-149795 kubelet[1202]: E0908 12:16:05.150671    1202 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 08 12:16:05 default-k8s-diff-port-149795 kubelet[1202]: E0908 12:16:05.150722    1202 kuberuntime_image.go:43] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 08 12:16:05 default-k8s-diff-port-149795 kubelet[1202]: E0908 12:16:05.150782    1202 kuberuntime_manager.go:1449] "Unhandled Error" err="container metrics-server start failed in pod metrics-server-746fcd58dc-6hdsd_kube-system(c9e0e26f-f05a-4d6d-979b-711c4381d179): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Sep 08 12:16:05 default-k8s-diff-port-149795 kubelet[1202]: E0908 12:16:05.150808    1202 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-6hdsd" podUID="c9e0e26f-f05a-4d6d-979b-711c4381d179"
	Sep 08 12:16:08 default-k8s-diff-port-149795 kubelet[1202]: E0908 12:16:08.111138    1202 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: initializing source docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-h5hcp" podUID="d20477db-7399-4b1f-ad64-6cfa0fb34d60"
	Sep 08 12:16:12 default-k8s-diff-port-149795 kubelet[1202]: I0908 12:16:12.106438    1202 scope.go:117] "RemoveContainer" containerID="831b8e1914d6dfd31df3a5b00805f5c589419c96fa409985f0bd9b6ba3d8f18e"
	Sep 08 12:16:12 default-k8s-diff-port-149795 kubelet[1202]: E0908 12:16:12.106563    1202 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-r9vzn_kubernetes-dashboard(f6400ab9-1f7a-4025-bae5-eb4d4dc9dae7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r9vzn" podUID="f6400ab9-1f7a-4025-bae5-eb4d4dc9dae7"
	Sep 08 12:16:12 default-k8s-diff-port-149795 kubelet[1202]: E0908 12:16:12.348660    1202 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757333772348278251  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 08 12:16:12 default-k8s-diff-port-149795 kubelet[1202]: E0908 12:16:12.348682    1202 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757333772348278251  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 08 12:16:18 default-k8s-diff-port-149795 kubelet[1202]: E0908 12:16:18.108740    1202 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-6hdsd" podUID="c9e0e26f-f05a-4d6d-979b-711c4381d179"
	Sep 08 12:16:22 default-k8s-diff-port-149795 kubelet[1202]: E0908 12:16:22.108393    1202 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: initializing source docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-h5hcp" podUID="d20477db-7399-4b1f-ad64-6cfa0fb34d60"
	Sep 08 12:16:22 default-k8s-diff-port-149795 kubelet[1202]: E0908 12:16:22.350488    1202 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757333782350074378  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 08 12:16:22 default-k8s-diff-port-149795 kubelet[1202]: E0908 12:16:22.350537    1202 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757333782350074378  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 08 12:16:25 default-k8s-diff-port-149795 kubelet[1202]: I0908 12:16:25.106059    1202 scope.go:117] "RemoveContainer" containerID="831b8e1914d6dfd31df3a5b00805f5c589419c96fa409985f0bd9b6ba3d8f18e"
	Sep 08 12:16:25 default-k8s-diff-port-149795 kubelet[1202]: E0908 12:16:25.106218    1202 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-r9vzn_kubernetes-dashboard(f6400ab9-1f7a-4025-bae5-eb4d4dc9dae7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r9vzn" podUID="f6400ab9-1f7a-4025-bae5-eb4d4dc9dae7"
	Sep 08 12:16:31 default-k8s-diff-port-149795 kubelet[1202]: E0908 12:16:31.108266    1202 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-6hdsd" podUID="c9e0e26f-f05a-4d6d-979b-711c4381d179"
	Sep 08 12:16:32 default-k8s-diff-port-149795 kubelet[1202]: E0908 12:16:32.352762    1202 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757333792352369264  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 08 12:16:32 default-k8s-diff-port-149795 kubelet[1202]: E0908 12:16:32.352787    1202 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757333792352369264  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 08 12:16:34 default-k8s-diff-port-149795 kubelet[1202]: E0908 12:16:34.109128    1202 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: initializing source docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-h5hcp" podUID="d20477db-7399-4b1f-ad64-6cfa0fb34d60"
	Sep 08 12:16:39 default-k8s-diff-port-149795 kubelet[1202]: I0908 12:16:39.106795    1202 scope.go:117] "RemoveContainer" containerID="831b8e1914d6dfd31df3a5b00805f5c589419c96fa409985f0bd9b6ba3d8f18e"
	Sep 08 12:16:39 default-k8s-diff-port-149795 kubelet[1202]: E0908 12:16:39.107021    1202 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-r9vzn_kubernetes-dashboard(f6400ab9-1f7a-4025-bae5-eb4d4dc9dae7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r9vzn" podUID="f6400ab9-1f7a-4025-bae5-eb4d4dc9dae7"
	Sep 08 12:16:42 default-k8s-diff-port-149795 kubelet[1202]: E0908 12:16:42.354755    1202 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757333802354446496  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 08 12:16:42 default-k8s-diff-port-149795 kubelet[1202]: E0908 12:16:42.354780    1202 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757333802354446496  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 08 12:16:46 default-k8s-diff-port-149795 kubelet[1202]: E0908 12:16:46.109599    1202 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-6hdsd" podUID="c9e0e26f-f05a-4d6d-979b-711c4381d179"
	Sep 08 12:16:47 default-k8s-diff-port-149795 kubelet[1202]: E0908 12:16:47.108882    1202 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: initializing source docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-h5hcp" podUID="d20477db-7399-4b1f-ad64-6cfa0fb34d60"
	
	
	==> storage-provisioner [2295a57e5d0f147bbdd47cb07012fadbe3fa31f4466b20fc874a981f413654bd] <==
	W0908 12:16:23.201118       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:16:25.204916       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:16:25.211002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:16:27.214061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:16:27.220282       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:16:29.224507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:16:29.229049       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:16:31.232188       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:16:31.240015       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:16:33.243455       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:16:33.250799       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:16:35.254963       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:16:35.259539       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:16:37.263124       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:16:37.270800       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:16:39.275320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:16:39.280398       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:16:41.283419       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:16:41.288650       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:16:43.291220       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:16:43.296002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:16:45.299757       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:16:45.308264       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:16:47.312178       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:16:47.318879       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c1d8c38b6064ace141c9fc470297bdad1b46cbfec17b7ed88917f4ed73e3f238] <==
	I0908 11:58:27.676366       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0908 11:58:57.679271       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-149795 -n default-k8s-diff-port-149795
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-149795 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-6hdsd kubernetes-dashboard-855c9754f9-h5hcp
helpers_test.go:282: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-149795 describe pod metrics-server-746fcd58dc-6hdsd kubernetes-dashboard-855c9754f9-h5hcp
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-149795 describe pod metrics-server-746fcd58dc-6hdsd kubernetes-dashboard-855c9754f9-h5hcp: exit status 1 (58.509052ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-6hdsd" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-h5hcp" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context default-k8s-diff-port-149795 describe pod metrics-server-746fcd58dc-6hdsd kubernetes-dashboard-855c9754f9-h5hcp: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.53s)

                                                
                                    

Test pass (282/329)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 33.08
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.06
9 TestDownloadOnly/v1.28.0/DeleteAll 0.14
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.0/json-events 18.19
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.06
18 TestDownloadOnly/v1.34.0/DeleteAll 0.14
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.64
22 TestOffline 92.9
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 209.27
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 14.5
35 TestAddons/parallel/Registry 28.37
36 TestAddons/parallel/RegistryCreds 0.89
38 TestAddons/parallel/InspektorGadget 6.31
39 TestAddons/parallel/MetricsServer 5.78
41 TestAddons/parallel/CSI 69.78
42 TestAddons/parallel/Headlamp 27.9
43 TestAddons/parallel/CloudSpanner 6.59
44 TestAddons/parallel/LocalPath 69.88
45 TestAddons/parallel/NvidiaDevicePlugin 6.6
46 TestAddons/parallel/Yakd 11.96
48 TestAddons/StoppedEnableDisable 91.28
49 TestCertOptions 64.81
50 TestCertExpiration 336.72
52 TestForceSystemdFlag 101.24
53 TestForceSystemdEnv 48.5
55 TestKVMDriverInstallOrUpdate 3.96
59 TestErrorSpam/setup 50.11
60 TestErrorSpam/start 0.35
61 TestErrorSpam/status 0.79
62 TestErrorSpam/pause 1.75
63 TestErrorSpam/unpause 2.01
64 TestErrorSpam/stop 94.07
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 88.56
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 31.71
71 TestFunctional/serial/KubeContext 0.04
72 TestFunctional/serial/KubectlGetPods 0.08
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.3
76 TestFunctional/serial/CacheCmd/cache/add_local 3.53
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.68
81 TestFunctional/serial/CacheCmd/cache/delete 0.1
82 TestFunctional/serial/MinikubeKubectlCmd 0.11
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
84 TestFunctional/serial/ExtraConfig 33.02
85 TestFunctional/serial/ComponentHealth 0.06
86 TestFunctional/serial/LogsCmd 1.48
87 TestFunctional/serial/LogsFileCmd 1.49
88 TestFunctional/serial/InvalidService 5.2
90 TestFunctional/parallel/ConfigCmd 0.34
91 TestFunctional/parallel/DashboardCmd 31.9
92 TestFunctional/parallel/DryRun 0.3
93 TestFunctional/parallel/InternationalLanguage 0.16
94 TestFunctional/parallel/StatusCmd 0.91
98 TestFunctional/parallel/ServiceCmdConnect 27.48
99 TestFunctional/parallel/AddonsCmd 0.14
102 TestFunctional/parallel/SSHCmd 0.42
103 TestFunctional/parallel/CpCmd 1.4
105 TestFunctional/parallel/FileSync 0.21
106 TestFunctional/parallel/CertSync 1.27
110 TestFunctional/parallel/NodeLabels 0.07
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.41
114 TestFunctional/parallel/License 0.66
115 TestFunctional/parallel/ServiceCmd/DeployApp 11.2
116 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
117 TestFunctional/parallel/MountCmd/any-port 13.64
118 TestFunctional/parallel/ProfileCmd/profile_list 0.36
119 TestFunctional/parallel/ProfileCmd/profile_json_output 0.35
120 TestFunctional/parallel/ServiceCmd/List 0.48
121 TestFunctional/parallel/ServiceCmd/JSONOutput 0.43
122 TestFunctional/parallel/ServiceCmd/HTTPS 0.29
123 TestFunctional/parallel/ServiceCmd/Format 0.28
124 TestFunctional/parallel/ServiceCmd/URL 0.28
134 TestFunctional/parallel/MountCmd/specific-port 1.94
135 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
136 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
137 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
138 TestFunctional/parallel/Version/short 0.05
139 TestFunctional/parallel/Version/components 0.61
140 TestFunctional/parallel/MountCmd/VerifyCleanup 1.38
141 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
142 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
143 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
144 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
145 TestFunctional/parallel/ImageCommands/ImageBuild 4.75
146 TestFunctional/parallel/ImageCommands/Setup 3.46
147 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.59
148 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.98
149 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.56
150 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.52
151 TestFunctional/parallel/ImageCommands/ImageRemove 0.51
152 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.69
153 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.72
154 TestFunctional/delete_echo-server_images 0.03
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 231.69
162 TestMultiControlPlane/serial/DeployApp 10.68
163 TestMultiControlPlane/serial/PingHostFromPods 1.22
164 TestMultiControlPlane/serial/AddWorkerNode 50.56
165 TestMultiControlPlane/serial/NodeLabels 0.07
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.93
167 TestMultiControlPlane/serial/CopyFile 13.74
168 TestMultiControlPlane/serial/StopSecondaryNode 91.71
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.68
170 TestMultiControlPlane/serial/RestartSecondaryNode 61.89
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.08
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 411.87
173 TestMultiControlPlane/serial/DeleteSecondaryNode 18.55
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.67
175 TestMultiControlPlane/serial/StopCluster 272.79
176 TestMultiControlPlane/serial/RestartCluster 101.39
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.66
178 TestMultiControlPlane/serial/AddSecondaryNode 90.94
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.94
183 TestJSONOutput/start/Command 85.27
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.79
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.7
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 7.36
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.2
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 93.2
215 TestMountStart/serial/StartWithMountFirst 29.4
216 TestMountStart/serial/VerifyMountFirst 0.38
217 TestMountStart/serial/StartWithMountSecond 30.88
218 TestMountStart/serial/VerifyMountSecond 0.39
219 TestMountStart/serial/DeleteFirst 0.91
220 TestMountStart/serial/VerifyMountPostDelete 0.39
221 TestMountStart/serial/Stop 1.76
222 TestMountStart/serial/RestartStopped 23.16
223 TestMountStart/serial/VerifyMountPostStop 0.38
226 TestMultiNode/serial/FreshStart2Nodes 116.02
227 TestMultiNode/serial/DeployApp2Nodes 9.72
228 TestMultiNode/serial/PingHostFrom2Pods 0.81
229 TestMultiNode/serial/AddNode 52.64
230 TestMultiNode/serial/MultiNodeLabels 0.06
231 TestMultiNode/serial/ProfileList 0.6
232 TestMultiNode/serial/CopyFile 7.39
233 TestMultiNode/serial/StopNode 3.18
234 TestMultiNode/serial/StartAfterStop 44.21
235 TestMultiNode/serial/RestartKeepsNodes 336.59
236 TestMultiNode/serial/DeleteNode 2.94
237 TestMultiNode/serial/StopMultiNode 181.93
238 TestMultiNode/serial/RestartMultiNode 109.05
239 TestMultiNode/serial/ValidateNameConflict 48.01
246 TestScheduledStopUnix 118.66
250 TestRunningBinaryUpgrade 167.47
252 TestKubernetesUpgrade 193.55
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
256 TestNoKubernetes/serial/StartWithK8s 124.29
264 TestNetworkPlugins/group/false 3.21
268 TestStoppedBinaryUpgrade/Setup 3.7
269 TestStoppedBinaryUpgrade/Upgrade 173.76
270 TestNoKubernetes/serial/StartWithStopK8s 66.13
271 TestNoKubernetes/serial/Start 53.14
272 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
273 TestNoKubernetes/serial/ProfileList 6.71
274 TestNoKubernetes/serial/Stop 1.47
276 TestStoppedBinaryUpgrade/MinikubeLogs 1.26
285 TestPause/serial/Start 129.13
286 TestNetworkPlugins/group/auto/Start 132.75
287 TestNetworkPlugins/group/kindnet/Start 97.67
288 TestPause/serial/SecondStartNoReconfiguration 37.32
289 TestNetworkPlugins/group/auto/KubeletFlags 0.23
290 TestNetworkPlugins/group/auto/NetCatPod 11.22
291 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
292 TestNetworkPlugins/group/auto/DNS 0.15
293 TestNetworkPlugins/group/auto/Localhost 0.13
294 TestNetworkPlugins/group/auto/HairPin 0.12
295 TestNetworkPlugins/group/kindnet/KubeletFlags 0.22
296 TestNetworkPlugins/group/kindnet/NetCatPod 11.24
297 TestPause/serial/Pause 1.1
298 TestPause/serial/VerifyStatus 0.29
299 TestPause/serial/Unpause 0.84
300 TestPause/serial/PauseAgain 0.92
301 TestPause/serial/DeletePaused 0.88
302 TestPause/serial/VerifyDeletedResources 0.79
303 TestNetworkPlugins/group/calico/Start 279.01
304 TestNetworkPlugins/group/kindnet/DNS 0.18
305 TestNetworkPlugins/group/kindnet/Localhost 0.14
306 TestNetworkPlugins/group/kindnet/HairPin 0.21
307 TestNetworkPlugins/group/custom-flannel/Start 103.07
308 TestNetworkPlugins/group/enable-default-cni/Start 125.38
309 TestNetworkPlugins/group/flannel/Start 82.57
310 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.24
311 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.28
312 TestNetworkPlugins/group/custom-flannel/DNS 0.2
313 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
314 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
315 TestNetworkPlugins/group/bridge/Start 98.14
316 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.24
317 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.27
318 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
319 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
320 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
322 TestStartStop/group/old-k8s-version/serial/FirstStart 71.89
323 TestNetworkPlugins/group/flannel/ControllerPod 6.01
324 TestNetworkPlugins/group/flannel/KubeletFlags 0.23
325 TestNetworkPlugins/group/flannel/NetCatPod 9.27
326 TestNetworkPlugins/group/flannel/DNS 0.18
327 TestNetworkPlugins/group/flannel/Localhost 0.12
328 TestNetworkPlugins/group/flannel/HairPin 0.13
330 TestStartStop/group/no-preload/serial/FirstStart 104.13
331 TestNetworkPlugins/group/bridge/KubeletFlags 0.27
332 TestNetworkPlugins/group/bridge/NetCatPod 11.28
333 TestStartStop/group/old-k8s-version/serial/DeployApp 12.35
334 TestNetworkPlugins/group/bridge/DNS 0.15
335 TestNetworkPlugins/group/bridge/Localhost 0.19
336 TestNetworkPlugins/group/bridge/HairPin 0.14
337 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.73
338 TestStartStop/group/old-k8s-version/serial/Stop 91.83
340 TestStartStop/group/embed-certs/serial/FirstStart 88.71
341 TestNetworkPlugins/group/calico/ControllerPod 6.01
342 TestNetworkPlugins/group/calico/KubeletFlags 0.26
343 TestNetworkPlugins/group/calico/NetCatPod 10.24
344 TestNetworkPlugins/group/calico/DNS 0.2
345 TestNetworkPlugins/group/calico/Localhost 0.16
346 TestNetworkPlugins/group/calico/HairPin 0.2
348 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 88.08
349 TestStartStop/group/no-preload/serial/DeployApp 14.31
350 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.06
351 TestStartStop/group/no-preload/serial/Stop 91.12
352 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
353 TestStartStop/group/old-k8s-version/serial/SecondStart 44.94
354 TestStartStop/group/embed-certs/serial/DeployApp 14.3
355 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.03
356 TestStartStop/group/embed-certs/serial/Stop 91.05
357 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 19.01
358 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 14.29
359 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.99
360 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
361 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.51
362 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
363 TestStartStop/group/old-k8s-version/serial/Pause 2.77
365 TestStartStop/group/newest-cni/serial/FirstStart 47.7
366 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
367 TestStartStop/group/no-preload/serial/SecondStart 74.19
368 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
369 TestStartStop/group/embed-certs/serial/SecondStart 59.47
370 TestStartStop/group/newest-cni/serial/DeployApp 0
371 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.48
372 TestStartStop/group/newest-cni/serial/Stop 11.42
373 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
374 TestStartStop/group/newest-cni/serial/SecondStart 49.59
375 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
376 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 60.44
377 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 23.01
378 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
379 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
380 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.1
381 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.32
382 TestStartStop/group/embed-certs/serial/Pause 3.25
383 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
384 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
385 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
386 TestStartStop/group/newest-cni/serial/Pause 3.4
387 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.28
388 TestStartStop/group/no-preload/serial/Pause 3.78
391 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
392 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.68
TestDownloadOnly/v1.28.0/json-events (33.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-613558 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-613558 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (33.083462759s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (33.08s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0908 10:29:41.161593  752332 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
I0908 10:29:41.161718  752332 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21503-748170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-613558
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-613558: exit status 85 (62.06218ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-613558 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-613558 │ jenkins │ v1.36.0 │ 08 Sep 25 10:29 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 10:29:08
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 10:29:08.119546  752344 out.go:360] Setting OutFile to fd 1 ...
	I0908 10:29:08.119796  752344 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 10:29:08.119807  752344 out.go:374] Setting ErrFile to fd 2...
	I0908 10:29:08.119813  752344 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 10:29:08.120046  752344 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21503-748170/.minikube/bin
	W0908 10:29:08.120195  752344 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21503-748170/.minikube/config/config.json: open /home/jenkins/minikube-integration/21503-748170/.minikube/config/config.json: no such file or directory
	I0908 10:29:08.120779  752344 out.go:368] Setting JSON to true
	I0908 10:29:08.121785  752344 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":69064,"bootTime":1757258284,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 10:29:08.121843  752344 start.go:140] virtualization: kvm guest
	I0908 10:29:08.123913  752344 out.go:99] [download-only-613558] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 10:29:08.124091  752344 notify.go:220] Checking for updates...
	W0908 10:29:08.124138  752344 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21503-748170/.minikube/cache/preloaded-tarball: no such file or directory
	I0908 10:29:08.125375  752344 out.go:171] MINIKUBE_LOCATION=21503
	I0908 10:29:08.126612  752344 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 10:29:08.127704  752344 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21503-748170/kubeconfig
	I0908 10:29:08.128754  752344 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21503-748170/.minikube
	I0908 10:29:08.129695  752344 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0908 10:29:08.131600  752344 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0908 10:29:08.131817  752344 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 10:29:08.168107  752344 out.go:99] Using the kvm2 driver based on user configuration
	I0908 10:29:08.168153  752344 start.go:304] selected driver: kvm2
	I0908 10:29:08.168162  752344 start.go:918] validating driver "kvm2" against <nil>
	I0908 10:29:08.168516  752344 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 10:29:08.168616  752344 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21503-748170/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	W0908 10:29:08.172355  752344 install.go:62] docker-machine-driver-kvm2: exit status 1
	I0908 10:29:08.173670  752344 out.go:99] Downloading driver docker-machine-driver-kvm2:
	I0908 10:29:08.173775  752344 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.36.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.36.0/docker-machine-driver-kvm2-amd64.sha256 -> /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 10:29:09.741422  752344 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 10:29:09.742020  752344 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32089MB, container=0MB
	I0908 10:29:09.742176  752344 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0908 10:29:09.742218  752344 cni.go:84] Creating CNI manager for ""
	I0908 10:29:09.742265  752344 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0908 10:29:09.742273  752344 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0908 10:29:09.742340  752344 start.go:348] cluster config:
	{Name:download-only-613558 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-613558 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 10:29:09.742506  752344 iso.go:125] acquiring lock: {Name:mk013a3bcd14eba8870ec8e08630600588ab11c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 10:29:09.744610  752344 out.go:99] Downloading VM boot image ...
	I0908 10:29:09.744639  752344 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21503-748170/.minikube/cache/iso/amd64/minikube-v1.36.0-1756980912-21488-amd64.iso
	I0908 10:29:22.779472  752344 out.go:99] Starting "download-only-613558" primary control-plane node in "download-only-613558" cluster
	I0908 10:29:22.779502  752344 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0908 10:29:22.931433  752344 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I0908 10:29:22.931473  752344 cache.go:58] Caching tarball of preloaded images
	I0908 10:29:22.931710  752344 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0908 10:29:22.933518  752344 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0908 10:29:22.933547  752344 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 ...
	I0908 10:29:23.089339  752344 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21503-748170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-613558 host does not exist
	  To start a cluster, run: "minikube start -p download-only-613558"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-613558
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.34.0/json-events (18.19s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-049029 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-049029 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (18.18922574s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (18.19s)

                                                
                                    
TestDownloadOnly/v1.34.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0908 10:29:59.679741  752332 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
I0908 10:29:59.679812  752332 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21503-748170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-049029
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-049029: exit status 85 (63.555315ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-613558 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-613558 │ jenkins │ v1.36.0 │ 08 Sep 25 10:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.36.0 │ 08 Sep 25 10:29 UTC │ 08 Sep 25 10:29 UTC │
	│ delete  │ -p download-only-613558                                                                                                                                                 │ download-only-613558 │ jenkins │ v1.36.0 │ 08 Sep 25 10:29 UTC │ 08 Sep 25 10:29 UTC │
	│ start   │ -o=json --download-only -p download-only-049029 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-049029 │ jenkins │ v1.36.0 │ 08 Sep 25 10:29 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 10:29:41
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 10:29:41.532799  752621 out.go:360] Setting OutFile to fd 1 ...
	I0908 10:29:41.533100  752621 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 10:29:41.533114  752621 out.go:374] Setting ErrFile to fd 2...
	I0908 10:29:41.533119  752621 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 10:29:41.533335  752621 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21503-748170/.minikube/bin
	I0908 10:29:41.533983  752621 out.go:368] Setting JSON to true
	I0908 10:29:41.534822  752621 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":69098,"bootTime":1757258284,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 10:29:41.534925  752621 start.go:140] virtualization: kvm guest
	I0908 10:29:41.536776  752621 out.go:99] [download-only-049029] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 10:29:41.536955  752621 notify.go:220] Checking for updates...
	I0908 10:29:41.538183  752621 out.go:171] MINIKUBE_LOCATION=21503
	I0908 10:29:41.539542  752621 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 10:29:41.540685  752621 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21503-748170/kubeconfig
	I0908 10:29:41.541721  752621 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21503-748170/.minikube
	I0908 10:29:41.542829  752621 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0908 10:29:41.544722  752621 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0908 10:29:41.544977  752621 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 10:29:41.576811  752621 out.go:99] Using the kvm2 driver based on user configuration
	I0908 10:29:41.576848  752621 start.go:304] selected driver: kvm2
	I0908 10:29:41.576859  752621 start.go:918] validating driver "kvm2" against <nil>
	I0908 10:29:41.577286  752621 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 10:29:41.577395  752621 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21503-748170/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0908 10:29:41.593036  752621 install.go:137] /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0908 10:29:41.593088  752621 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 10:29:41.593762  752621 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32089MB, container=0MB
	I0908 10:29:41.593979  752621 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0908 10:29:41.594017  752621 cni.go:84] Creating CNI manager for ""
	I0908 10:29:41.594090  752621 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0908 10:29:41.594102  752621 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0908 10:29:41.594180  752621 start.go:348] cluster config:
	{Name:download-only-049029 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:download-only-049029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 10:29:41.594316  752621 iso.go:125] acquiring lock: {Name:mk013a3bcd14eba8870ec8e08630600588ab11c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 10:29:41.595765  752621 out.go:99] Starting "download-only-049029" primary control-plane node in "download-only-049029" cluster
	I0908 10:29:41.595782  752621 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 10:29:41.819670  752621 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0908 10:29:41.819712  752621 cache.go:58] Caching tarball of preloaded images
	I0908 10:29:41.819916  752621 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 10:29:41.821569  752621 out.go:99] Downloading Kubernetes v1.34.0 preload ...
	I0908 10:29:41.821589  752621 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 ...
	I0908 10:29:41.973470  752621 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2ff28357f4fb6607eaee8f503f8804cd -> /home/jenkins/minikube-integration/21503-748170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-049029 host does not exist
	  To start a cluster, run: "minikube start -p download-only-049029"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.06s)
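The download-only run above fetches the preload tarball with a "?checksum=md5:..." query and verifies the cached file against that digest. Below is a minimal, hypothetical Go sketch of that verify-after-download step, using only the standard library; the tarball path and digest are copied from the log above, and verifyMD5 is an illustrative helper, not a minikube function.

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 re-computes the MD5 digest of a downloaded file and compares it
// against the expected hex digest carried in the "?checksum=md5:..." query.
func verifyMD5(path, wantHex string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != wantHex {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
	}
	return nil
}

func main() {
	// Path and digest taken from the preload download log above.
	tarball := "/home/jenkins/minikube-integration/21503-748170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4"
	if err := verifyMD5(tarball, "2ff28357f4fb6607eaee8f503f8804cd"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("preload checksum OK")
}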

                                                
                                    
TestDownloadOnly/v1.34.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-049029
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.64s)

                                                
                                                
=== RUN   TestBinaryMirror
I0908 10:30:00.279970  752332 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-286578 --alsologtostderr --binary-mirror http://127.0.0.1:39233 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-286578" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-286578
--- PASS: TestBinaryMirror (0.64s)

                                                
                                    
TestOffline (92.9s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-894901 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-894901 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m31.782998585s)
helpers_test.go:175: Cleaning up "offline-crio-894901" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-894901
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-894901: (1.118242416s)
--- PASS: TestOffline (92.90s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-451875
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-451875: exit status 85 (53.632248ms)

                                                
                                                
-- stdout --
	* Profile "addons-451875" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-451875"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-451875
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-451875: exit status 85 (55.54972ms)

                                                
                                                
-- stdout --
	* Profile "addons-451875" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-451875"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (209.27s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-451875 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-451875 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m29.273339258s)
--- PASS: TestAddons/Setup (209.27s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-451875 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-451875 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (14.5s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-451875 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-451875 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a8c1d3f8-0cf8-417f-84a4-d6271a60b5cf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [a8c1d3f8-0cf8-417f-84a4-d6271a60b5cf] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 14.004703517s
addons_test.go:694: (dbg) Run:  kubectl --context addons-451875 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-451875 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-451875 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (14.50s)

                                                
                                    
TestAddons/parallel/Registry (28.37s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 7.468053ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-v5x6w" [3db84b88-8a2e-45b9-9019-7c26805a646c] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003088847s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-58n5b" [a453df18-bf9a-4b07-9f85-b98dd83f4a43] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003907982s
addons_test.go:392: (dbg) Run:  kubectl --context addons-451875 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-451875 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-451875 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (16.447024173s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-451875 ip
2025/09/08 10:34:21 [DEBUG] GET http://192.168.39.92:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-451875 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (28.37s)
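The registry check above probes the addon from inside the cluster with "wget --spider -S http://registry.kube-system.svc.cluster.local" and then, per the debug line, issues a GET against the node's registry port from the host. A rough Go sketch of that host-side probe follows, assuming the node IP and port shown in the log; it is illustrative only and not the test's helper code.

package main

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	// Node IP and registry port as reported by "minikube ip" and the debug GET above.
	url := "http://192.168.39.92:5000"

	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		fmt.Fprintf(os.Stderr, "registry not reachable: %v\n", err)
		os.Exit(1)
	}
	defer resp.Body.Close()

	// A plain container registry answers "/" with 200 OK when it is up.
	fmt.Printf("registry responded: %s\n", resp.Status)
}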

                                                
                                    
TestAddons/parallel/RegistryCreds (0.89s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.728817ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-451875
addons_test.go:332: (dbg) Run:  kubectl --context addons-451875 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-451875 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.89s)

                                                
                                    
TestAddons/parallel/InspektorGadget (6.31s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-kl2d9" [0a6b472c-8142-4fff-abd2-077d309f569f] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004502033s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-451875 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.31s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.78s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 4.69753ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-s4lpz" [9a1e2579-44a0-42c3-84fd-567e80c96fc1] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005169362s
addons_test.go:463: (dbg) Run:  kubectl --context addons-451875 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-451875 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.78s)

                                                
                                    
TestAddons/parallel/CSI (69.78s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0908 10:34:17.928013  752332 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0908 10:34:17.934241  752332 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0908 10:34:17.934272  752332 kapi.go:107] duration metric: took 6.290215ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 6.303542ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-451875 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-451875 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-451875 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-451875 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-451875 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-451875 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-451875 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-451875 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-451875 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-451875 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-451875 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-451875 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-451875 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-451875 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-451875 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-451875 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-451875 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-451875 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [4469c03f-fb9d-4043-9219-c7e1880f6c6f] Pending
helpers_test.go:352: "task-pv-pod" [4469c03f-fb9d-4043-9219-c7e1880f6c6f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [4469c03f-fb9d-4043-9219-c7e1880f6c6f] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 18.003511798s
addons_test.go:572: (dbg) Run:  kubectl --context addons-451875 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-451875 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-451875 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-451875 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-451875 delete pod task-pv-pod: (1.138857956s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-451875 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-451875 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-451875 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-451875 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-451875 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-451875 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-451875 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-451875 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-451875 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-451875 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-451875 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-451875 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-451875 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-451875 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-451875 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-451875 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-451875 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-451875 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-451875 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-451875 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [8ed345f3-826c-4ac8-b424-1a217099cdd0] Pending
helpers_test.go:352: "task-pv-pod-restore" [8ed345f3-826c-4ac8-b424-1a217099cdd0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [8ed345f3-826c-4ac8-b424-1a217099cdd0] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.007490434s
addons_test.go:614: (dbg) Run:  kubectl --context addons-451875 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-451875 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-451875 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-451875 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-451875 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-451875 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.925905188s)
--- PASS: TestAddons/parallel/CSI (69.78s)
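The long runs of "kubectl get pvc ... -o jsonpath={.status.phase}" above are the helper polling the claim until it reports the expected phase (Bound) or times out. A minimal Go sketch of that poll, shelling out to the same kubectl invocation shown in the log; the context name is taken from the log, while waitForPVCPhase and the 2-second interval are assumptions for illustration, not the repository's helper.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCPhase polls a PVC's .status.phase via kubectl until it matches
// the wanted phase or the timeout elapses, mirroring the command in the log.
func waitForPVCPhase(kubeContext, ns, name, want string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pvc", name, "-n", ns,
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == want {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s did not reach phase %q within %s", ns, name, want, timeout)
}

func main() {
	if err := waitForPVCPhase("addons-451875", "default", "hpvc", "Bound", 6*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("hpvc is Bound")
}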

                                                
                                    
TestAddons/parallel/Headlamp (27.9s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-451875 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6f46646d79-j7k6h" [91b6b473-5c52-4e8e-bfaa-2eb05340e424] Pending
helpers_test.go:352: "headlamp-6f46646d79-j7k6h" [91b6b473-5c52-4e8e-bfaa-2eb05340e424] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6f46646d79-j7k6h" [91b6b473-5c52-4e8e-bfaa-2eb05340e424] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 21.007363726s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-451875 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-451875 addons disable headlamp --alsologtostderr -v=1: (5.934130934s)
--- PASS: TestAddons/parallel/Headlamp (27.90s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.59s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-c55d4cb6d-vx48l" [9da8dff6-654b-4328-8534-b5fa685c714b] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004159495s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-451875 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.59s)

                                                
                                    
TestAddons/parallel/LocalPath (69.88s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-451875 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-451875 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-451875 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-451875 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-451875 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-451875 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-451875 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-451875 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-451875 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-451875 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-451875 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-451875 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-451875 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-451875 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-451875 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-451875 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-451875 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-451875 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-451875 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [6179b0df-7354-4732-999e-caa6fc1d9b78] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [6179b0df-7354-4732-999e-caa6fc1d9b78] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [6179b0df-7354-4732-999e-caa6fc1d9b78] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 10.004095609s
addons_test.go:967: (dbg) Run:  kubectl --context addons-451875 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-451875 ssh "cat /opt/local-path-provisioner/pvc-2a2fc39d-914a-4def-bafb-67a8b986f998_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-451875 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-451875 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-451875 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-451875 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.046926074s)
--- PASS: TestAddons/parallel/LocalPath (69.88s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.6s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-w6bbw" [248f80b5-4ed1-4698-ac0d-9cd7d127bbf2] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004326252s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-451875 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.60s)

                                                
                                    
TestAddons/parallel/Yakd (11.96s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-xzjx2" [38c26ce6-7abb-4016-9f9a-12a58612cbd9] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004591325s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-451875 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-451875 addons disable yakd --alsologtostderr -v=1: (5.951047495s)
--- PASS: TestAddons/parallel/Yakd (11.96s)

                                                
                                    
TestAddons/StoppedEnableDisable (91.28s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-451875
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-451875: (1m30.989776789s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-451875
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-451875
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-451875
--- PASS: TestAddons/StoppedEnableDisable (91.28s)

                                                
                                    
TestCertOptions (64.81s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-312456 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-312456 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m3.305789695s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-312456 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-312456 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-312456 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-312456" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-312456
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-312456: (1.028944989s)
--- PASS: TestCertOptions (64.81s)

                                                
                                    
TestCertExpiration (336.72s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-535057 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-535057 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m24.796568713s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-535057 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
E0908 11:49:53.959691  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-535057 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (1m10.793707539s)
helpers_test.go:175: Cleaning up "cert-expiration-535057" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-535057
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-535057: (1.124971501s)
--- PASS: TestCertExpiration (336.72s)

                                                
                                    
TestForceSystemdFlag (101.24s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-950564 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-950564 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m40.009586589s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-950564 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-950564" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-950564
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-950564: (1.032610052s)
--- PASS: TestForceSystemdFlag (101.24s)

                                                
                                    
TestForceSystemdEnv (48.5s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-118477 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-118477 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (47.656398627s)
helpers_test.go:175: Cleaning up "force-systemd-env-118477" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-118477
--- PASS: TestForceSystemdEnv (48.50s)

                                                
                                    
TestKVMDriverInstallOrUpdate (3.96s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0908 11:42:34.645489  752332 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0908 11:42:34.645710  752332 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0908 11:42:34.676456  752332 install.go:62] docker-machine-driver-kvm2: exit status 1
W0908 11:42:34.676802  752332 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0908 11:42:34.676872  752332 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4005954193/001/docker-machine-driver-kvm2
I0908 11:42:35.277808  752332 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate4005954193/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440] Decompressors:map[bz2:0xc00051d3b0 gz:0xc00051d3b8 tar:0xc00051d360 tar.bz2:0xc00051d370 tar.gz:0xc00051d380 tar.xz:0xc00051d390 tar.zst:0xc00051d3a0 tbz2:0xc00051d370 tgz:0xc00051d380 txz:0xc00051d390 tzst:0xc00051d3a0 xz:0xc00051d3f0 zip:0xc00051d450 zst:0xc00051d3f8] Getters:map[file:0xc0014ab500 http:0xc0005a5810 https:0xc0005a5860] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0908 11:42:35.277867  752332 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4005954193/001/docker-machine-driver-kvm2
I0908 11:42:37.303763  752332 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0908 11:42:37.303864  752332 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0908 11:42:37.333161  752332 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0908 11:42:37.333196  752332 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0908 11:42:37.333281  752332 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0908 11:42:37.333309  752332 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4005954193/002/docker-machine-driver-kvm2
I0908 11:42:37.652864  752332 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate4005954193/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440] Decompressors:map[bz2:0xc00051d3b0 gz:0xc00051d3b8 tar:0xc00051d360 tar.bz2:0xc00051d370 tar.gz:0xc00051d380 tar.xz:0xc00051d390 tar.zst:0xc00051d3a0 tbz2:0xc00051d370 tgz:0xc00051d380 txz:0xc00051d390 tzst:0xc00051d3a0 xz:0xc00051d3f0 zip:0xc00051d450 zst:0xc00051d3f8] Getters:map[file:0xc0018e2840 http:0xc00047d220 https:0xc00047d270] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0908 11:42:37.652914  752332 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4005954193/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (3.96s)
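The install log above first tries the architecture-suffixed driver asset, hits a 404 on its checksum file, and falls back to the un-suffixed "common version" asset. Below is a hedged Go sketch of that fallback order using only net/http; the URLs are copied from the log, while the fetch helper and destination path are hypothetical and deliberately skip the checksum handling the real downloader performs.

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

// fetch downloads url into dst and reports non-200 responses as errors,
// so the caller can fall back to the next candidate URL.
func fetch(url, dst string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("bad response code: %d", resp.StatusCode)
	}
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, resp.Body)
	return err
}

func main() {
	base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0/"
	dst := "/tmp/docker-machine-driver-kvm2"

	// Arch-specific asset first, then the common one, as in the log above.
	for _, name := range []string{"docker-machine-driver-kvm2-amd64", "docker-machine-driver-kvm2"} {
		if err := fetch(base+name, dst); err != nil {
			fmt.Fprintf(os.Stderr, "download of %s failed: %v, trying next\n", name, err)
			continue
		}
		fmt.Println("downloaded", name)
		return
	}
	os.Exit(1)
}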

                                                
                                    
TestErrorSpam/setup (50.11s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-776267 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-776267 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-776267 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-776267 --driver=kvm2  --container-runtime=crio: (50.112340729s)
--- PASS: TestErrorSpam/setup (50.11s)

                                                
                                    
TestErrorSpam/start (0.35s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-776267 --log_dir /tmp/nospam-776267 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-776267 --log_dir /tmp/nospam-776267 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-776267 --log_dir /tmp/nospam-776267 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

                                                
                                    
TestErrorSpam/status (0.79s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-776267 --log_dir /tmp/nospam-776267 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-776267 --log_dir /tmp/nospam-776267 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-776267 --log_dir /tmp/nospam-776267 status
--- PASS: TestErrorSpam/status (0.79s)

                                                
                                    
TestErrorSpam/pause (1.75s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-776267 --log_dir /tmp/nospam-776267 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-776267 --log_dir /tmp/nospam-776267 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-776267 --log_dir /tmp/nospam-776267 pause
--- PASS: TestErrorSpam/pause (1.75s)

                                                
                                    
TestErrorSpam/unpause (2.01s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-776267 --log_dir /tmp/nospam-776267 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-776267 --log_dir /tmp/nospam-776267 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-776267 --log_dir /tmp/nospam-776267 unpause
--- PASS: TestErrorSpam/unpause (2.01s)

                                                
                                    
TestErrorSpam/stop (94.07s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-776267 --log_dir /tmp/nospam-776267 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-776267 --log_dir /tmp/nospam-776267 stop: (1m30.906500063s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-776267 --log_dir /tmp/nospam-776267 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-776267 --log_dir /tmp/nospam-776267 stop: (1.360444147s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-776267 --log_dir /tmp/nospam-776267 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-776267 --log_dir /tmp/nospam-776267 stop: (1.800396761s)
--- PASS: TestErrorSpam/stop (94.07s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21503-748170/.minikube/files/etc/test/nested/copy/752332/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (88.56s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-461050 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-461050 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m28.563753493s)
--- PASS: TestFunctional/serial/StartWithProxy (88.56s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (31.71s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0908 10:42:34.273134  752332 config.go:182] Loaded profile config "functional-461050": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-461050 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-461050 --alsologtostderr -v=8: (31.707440995s)
functional_test.go:678: soft start took 31.708116047s for "functional-461050" cluster.
I0908 10:43:05.980970  752332 config.go:182] Loaded profile config "functional-461050": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (31.71s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-461050 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-461050 cache add registry.k8s.io/pause:3.1: (1.079899948s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-461050 cache add registry.k8s.io/pause:3.3: (1.108931479s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-461050 cache add registry.k8s.io/pause:latest: (1.114147504s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.30s)

TestFunctional/serial/CacheCmd/cache/add_local (3.53s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-461050 /tmp/TestFunctionalserialCacheCmdcacheadd_local3890714155/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 cache add minikube-local-cache-test:functional-461050
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-461050 cache add minikube-local-cache-test:functional-461050: (3.239564803s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 cache delete minikube-local-cache-test:functional-461050
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-461050
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (3.53s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.68s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-461050 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (221.684027ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.68s)
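The cache_reload steps above can be replayed by hand against the same profile. A minimal sketch, assuming the functional-461050 profile is running and a minikube binary on PATH (the run above uses the out/minikube-linux-amd64 build):

  # drop the cached pause image from inside the node
  minikube -p functional-461050 ssh "sudo crictl rmi registry.k8s.io/pause:latest"
  # inspecti exits non-zero once the image is gone
  minikube -p functional-461050 ssh "sudo crictl inspecti registry.k8s.io/pause:latest" || echo "image removed"
  # repopulate the node from minikube's local image cache
  minikube -p functional-461050 cache reload
  # the image should be present again
  minikube -p functional-461050 ssh "sudo crictl inspecti registry.k8s.io/pause:latest"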

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 kubectl -- --context functional-461050 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-461050 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (33.02s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-461050 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0908 10:43:30.888487  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:43:30.894984  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:43:30.906443  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:43:30.927876  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:43:30.969380  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:43:31.050916  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:43:31.212504  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:43:31.534290  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:43:32.176504  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:43:33.457939  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:43:36.019395  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:43:41.140967  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-461050 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.017566961s)
functional_test.go:776: restart took 33.01770764s for "functional-461050" cluster.
I0908 10:43:48.279095  752332 config.go:182] Loaded profile config "functional-461050": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (33.02s)

TestFunctional/serial/ComponentHealth (0.06s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-461050 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.48s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-461050 logs: (1.477631155s)
--- PASS: TestFunctional/serial/LogsCmd (1.48s)

TestFunctional/serial/LogsFileCmd (1.49s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 logs --file /tmp/TestFunctionalserialLogsFileCmd4289990220/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-461050 logs --file /tmp/TestFunctionalserialLogsFileCmd4289990220/001/logs.txt: (1.487611475s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.49s)

TestFunctional/serial/InvalidService (5.2s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-461050 apply -f testdata/invalidsvc.yaml
E0908 10:43:51.382801  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-461050
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-461050: exit status 115 (300.346766ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL             │
	├───────────┼─────────────┼─────────────┼────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.94:31072 │
	└───────────┴─────────────┴─────────────┴────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-461050 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-461050 delete -f testdata/invalidsvc.yaml: (1.700346723s)
--- PASS: TestFunctional/serial/InvalidService (5.20s)

TestFunctional/parallel/ConfigCmd (0.34s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-461050 config get cpus: exit status 14 (54.762475ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-461050 config get cpus: exit status 14 (48.537739ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.34s)

TestFunctional/parallel/DashboardCmd (31.9s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-461050 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-461050 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 760465: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (31.90s)

TestFunctional/parallel/DryRun (0.3s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-461050 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-461050 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (156.375926ms)

                                                
                                                
-- stdout --
	* [functional-461050] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21503
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21503-748170/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21503-748170/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 10:43:58.213592  760244 out.go:360] Setting OutFile to fd 1 ...
	I0908 10:43:58.213886  760244 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 10:43:58.213899  760244 out.go:374] Setting ErrFile to fd 2...
	I0908 10:43:58.213906  760244 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 10:43:58.214138  760244 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21503-748170/.minikube/bin
	I0908 10:43:58.214666  760244 out.go:368] Setting JSON to false
	I0908 10:43:58.215713  760244 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":69954,"bootTime":1757258284,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 10:43:58.215804  760244 start.go:140] virtualization: kvm guest
	I0908 10:43:58.217452  760244 out.go:179] * [functional-461050] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 10:43:58.218745  760244 notify.go:220] Checking for updates...
	I0908 10:43:58.219428  760244 out.go:179]   - MINIKUBE_LOCATION=21503
	I0908 10:43:58.220760  760244 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 10:43:58.221931  760244 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21503-748170/kubeconfig
	I0908 10:43:58.223197  760244 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21503-748170/.minikube
	I0908 10:43:58.224203  760244 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 10:43:58.225182  760244 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 10:43:58.226608  760244 config.go:182] Loaded profile config "functional-461050": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 10:43:58.227194  760244 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 10:43:58.227252  760244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 10:43:58.249755  760244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36491
	I0908 10:43:58.250352  760244 main.go:141] libmachine: () Calling .GetVersion
	I0908 10:43:58.250929  760244 main.go:141] libmachine: Using API Version  1
	I0908 10:43:58.250945  760244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 10:43:58.251438  760244 main.go:141] libmachine: () Calling .GetMachineName
	I0908 10:43:58.251677  760244 main.go:141] libmachine: (functional-461050) Calling .DriverName
	I0908 10:43:58.251856  760244 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 10:43:58.252171  760244 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 10:43:58.252202  760244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 10:43:58.271919  760244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37943
	I0908 10:43:58.272454  760244 main.go:141] libmachine: () Calling .GetVersion
	I0908 10:43:58.273037  760244 main.go:141] libmachine: Using API Version  1
	I0908 10:43:58.273076  760244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 10:43:58.273524  760244 main.go:141] libmachine: () Calling .GetMachineName
	I0908 10:43:58.273775  760244 main.go:141] libmachine: (functional-461050) Calling .DriverName
	I0908 10:43:58.311841  760244 out.go:179] * Using the kvm2 driver based on existing profile
	I0908 10:43:58.312924  760244 start.go:304] selected driver: kvm2
	I0908 10:43:58.312939  760244 start.go:918] validating driver "kvm2" against &{Name:functional-461050 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.0 ClusterName:functional-461050 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
ntString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 10:43:58.313036  760244 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 10:43:58.315112  760244 out.go:203] 
	W0908 10:43:58.316249  760244 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0908 10:43:58.317287  760244 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-461050 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.30s)
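The dry-run invocation above only validates the requested settings; no VM is created or modified. A minimal sketch of the same memory check, assuming the existing functional-461050 profile:

  # 250MB is below minikube's usable minimum, so this fails fast without starting anything
  minikube start -p functional-461050 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio
  echo "exit code: $?"   # 23, i.e. RSRC_INSUFFICIENT_REQ_MEMORY in the run above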

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.16s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-461050 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-461050 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (157.058625ms)

                                                
                                                
-- stdout --
	* [functional-461050] minikube v1.36.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21503
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21503-748170/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21503-748170/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 10:43:58.069031  760185 out.go:360] Setting OutFile to fd 1 ...
	I0908 10:43:58.069126  760185 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 10:43:58.069131  760185 out.go:374] Setting ErrFile to fd 2...
	I0908 10:43:58.069135  760185 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 10:43:58.069538  760185 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21503-748170/.minikube/bin
	I0908 10:43:58.070043  760185 out.go:368] Setting JSON to false
	I0908 10:43:58.071079  760185 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":69954,"bootTime":1757258284,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 10:43:58.071153  760185 start.go:140] virtualization: kvm guest
	I0908 10:43:58.072932  760185 out.go:179] * [functional-461050] minikube v1.36.0 sur Ubuntu 20.04 (kvm/amd64)
	I0908 10:43:58.074124  760185 out.go:179]   - MINIKUBE_LOCATION=21503
	I0908 10:43:58.074164  760185 notify.go:220] Checking for updates...
	I0908 10:43:58.076114  760185 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 10:43:58.077306  760185 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21503-748170/kubeconfig
	I0908 10:43:58.078294  760185 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21503-748170/.minikube
	I0908 10:43:58.079278  760185 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 10:43:58.080263  760185 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 10:43:58.081597  760185 config.go:182] Loaded profile config "functional-461050": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 10:43:58.082026  760185 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 10:43:58.082086  760185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 10:43:58.098052  760185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45015
	I0908 10:43:58.098546  760185 main.go:141] libmachine: () Calling .GetVersion
	I0908 10:43:58.099209  760185 main.go:141] libmachine: Using API Version  1
	I0908 10:43:58.099242  760185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 10:43:58.099624  760185 main.go:141] libmachine: () Calling .GetMachineName
	I0908 10:43:58.099835  760185 main.go:141] libmachine: (functional-461050) Calling .DriverName
	I0908 10:43:58.100141  760185 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 10:43:58.100580  760185 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 10:43:58.100631  760185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 10:43:58.120150  760185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44647
	I0908 10:43:58.120722  760185 main.go:141] libmachine: () Calling .GetVersion
	I0908 10:43:58.121439  760185 main.go:141] libmachine: Using API Version  1
	I0908 10:43:58.121462  760185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 10:43:58.121862  760185 main.go:141] libmachine: () Calling .GetMachineName
	I0908 10:43:58.122067  760185 main.go:141] libmachine: (functional-461050) Calling .DriverName
	I0908 10:43:58.155926  760185 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I0908 10:43:58.156823  760185 start.go:304] selected driver: kvm2
	I0908 10:43:58.156839  760185 start.go:918] validating driver "kvm2" against &{Name:functional-461050 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.0 ClusterName:functional-461050 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
ntString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 10:43:58.156981  760185 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 10:43:58.159118  760185 out.go:203] 
	W0908 10:43:58.160096  760185 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0908 10:43:58.161284  760185 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

TestFunctional/parallel/StatusCmd (0.91s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.91s)

TestFunctional/parallel/ServiceCmdConnect (27.48s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-461050 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-461050 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-fw5qz" [ad93be4e-3abd-4fc6-a8d6-2d44ecab1f22] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-fw5qz" [ad93be4e-3abd-4fc6-a8d6-2d44ecab1f22] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 27.004022202s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.94:31318
functional_test.go:1680: http://192.168.39.94:31318: success! body:
Request served by hello-node-connect-7d85dfc575-fw5qz

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.94:31318
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
E0908 10:44:52.826863  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:46:14.748342  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:48:30.882977  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:48:58.589834  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ServiceCmdConnect (27.48s)
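The NodePort round-trip above can be reproduced outside the test harness. A minimal sketch, assuming kubectl is pointed at the functional-461050 context; the kubectl wait step is an extra convenience the test itself does not use:

  kubectl --context functional-461050 create deployment hello-node-connect --image kicbase/echo-server
  kubectl --context functional-461050 expose deployment hello-node-connect --type=NodePort --port=8080
  kubectl --context functional-461050 wait --for=condition=available deployment/hello-node-connect --timeout=120s
  # resolve the NodePort URL through minikube and hit the echo server
  URL=$(minikube -p functional-461050 service hello-node-connect --url)
  curl -s "$URL"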

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 addons list
E0908 10:44:11.864767  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/SSHCmd (0.42s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.42s)

TestFunctional/parallel/CpCmd (1.4s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 ssh -n functional-461050 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 cp functional-461050:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd309521879/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 ssh -n functional-461050 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 ssh -n functional-461050 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.40s)

TestFunctional/parallel/FileSync (0.21s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/752332/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 ssh "sudo cat /etc/test/nested/copy/752332/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)

TestFunctional/parallel/CertSync (1.27s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/752332.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 ssh "sudo cat /etc/ssl/certs/752332.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/752332.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 ssh "sudo cat /usr/share/ca-certificates/752332.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/7523322.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 ssh "sudo cat /etc/ssl/certs/7523322.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/7523322.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 ssh "sudo cat /usr/share/ca-certificates/7523322.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.27s)

TestFunctional/parallel/NodeLabels (0.07s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-461050 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.41s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-461050 ssh "sudo systemctl is-active docker": exit status 1 (200.543185ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-461050 ssh "sudo systemctl is-active containerd": exit status 1 (206.624871ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.41s)
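Because this cluster runs crio, the test above expects the other runtime units to be stopped. A minimal sketch of the same probe; systemctl is-active prints "inactive" and exits 3 for a stopped unit, matching the captured output:

  for unit in docker containerd; do
    minikube -p functional-461050 ssh "sudo systemctl is-active $unit"
    echo "$unit -> exit $?"   # expect 3 / inactive while crio is the active runtime
  done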

                                                
                                    
TestFunctional/parallel/License (0.66s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.66s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.2s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-461050 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-461050 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-wq9fk" [cd5a8578-1c51-4e6b-8d77-4d87fce03552] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-wq9fk" [cd5a8578-1c51-4e6b-8d77-4d87fce03552] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.006407911s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.20s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

TestFunctional/parallel/MountCmd/any-port (13.64s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-461050 /tmp/TestFunctionalparallelMountCmdany-port3097749525/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1757328237180600687" to /tmp/TestFunctionalparallelMountCmdany-port3097749525/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1757328237180600687" to /tmp/TestFunctionalparallelMountCmdany-port3097749525/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1757328237180600687" to /tmp/TestFunctionalparallelMountCmdany-port3097749525/001/test-1757328237180600687
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-461050 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (213.331201ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0908 10:43:57.394332  752332 retry.go:31] will retry after 597.571475ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep  8 10:43 created-by-test
-rw-r--r-- 1 docker docker 24 Sep  8 10:43 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep  8 10:43 test-1757328237180600687
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 ssh cat /mount-9p/test-1757328237180600687
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-461050 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [21916ec1-56c0-46e0-bdf9-d3c96579dfa2] Pending
helpers_test.go:352: "busybox-mount" [21916ec1-56c0-46e0-bdf9-d3c96579dfa2] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [21916ec1-56c0-46e0-bdf9-d3c96579dfa2] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [21916ec1-56c0-46e0-bdf9-d3c96579dfa2] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 11.004575573s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-461050 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-461050 /tmp/TestFunctionalparallelMountCmdany-port3097749525/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (13.64s)
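The any-port variant exercises minikube's 9p host mount end to end. A minimal sketch of the same flow, assuming a hypothetical host directory /tmp/mount-demo and the functional-461050 profile:

  mkdir -p /tmp/mount-demo && date > /tmp/mount-demo/created-by-hand
  # keep the mount helper running in the background
  minikube -p functional-461050 mount /tmp/mount-demo:/mount-9p &
  MOUNT_PID=$!
  sleep 2
  # the directory should be visible as a 9p mount inside the node
  minikube -p functional-461050 ssh "findmnt -T /mount-9p | grep 9p"
  minikube -p functional-461050 ssh "ls -la /mount-9p"
  # tear down
  kill $MOUNT_PID
  minikube -p functional-461050 ssh "sudo umount -f /mount-9p" || true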

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.36s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "312.479387ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "48.747651ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "290.621323ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "57.364896ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

TestFunctional/parallel/ServiceCmd/List (0.48s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.48s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.43s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 service list -o json
functional_test.go:1504: Took "430.6233ms" to run "out/minikube-linux-amd64 -p functional-461050 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.43s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.94:30375
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.29s)

TestFunctional/parallel/ServiceCmd/Format (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.28s)

TestFunctional/parallel/ServiceCmd/URL (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.94:30375
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.28s)

TestFunctional/parallel/MountCmd/specific-port (1.94s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-461050 /tmp/TestFunctionalparallelMountCmdspecific-port4044613271/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-461050 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (217.919954ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0908 10:44:11.035479  752332 retry.go:31] will retry after 648.603324ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-461050 /tmp/TestFunctionalparallelMountCmdspecific-port4044613271/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-461050 ssh "sudo umount -f /mount-9p": exit status 1 (227.955203ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-461050 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-461050 /tmp/TestFunctionalparallelMountCmdspecific-port4044613271/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.94s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.61s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.61s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.38s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-461050 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3033730451/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-461050 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3033730451/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-461050 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3033730451/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-461050 ssh "findmnt -T" /mount1: exit status 1 (292.626618ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0908 10:44:13.055257  752332 retry.go:31] will retry after 437.400712ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-461050 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-461050 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3033730451/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-461050 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3033730451/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-461050 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3033730451/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.38s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-461050 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-461050
localhost/kicbase/echo-server:functional-461050
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-461050 image ls --format short --alsologtostderr:
I0908 10:44:31.463218  762122 out.go:360] Setting OutFile to fd 1 ...
I0908 10:44:31.463506  762122 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 10:44:31.463518  762122 out.go:374] Setting ErrFile to fd 2...
I0908 10:44:31.463522  762122 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 10:44:31.465200  762122 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21503-748170/.minikube/bin
I0908 10:44:31.465906  762122 config.go:182] Loaded profile config "functional-461050": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 10:44:31.466000  762122 config.go:182] Loaded profile config "functional-461050": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 10:44:31.466326  762122 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
I0908 10:44:31.466393  762122 main.go:141] libmachine: Launching plugin server for driver kvm2
I0908 10:44:31.482456  762122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45411
I0908 10:44:31.483110  762122 main.go:141] libmachine: () Calling .GetVersion
I0908 10:44:31.483840  762122 main.go:141] libmachine: Using API Version  1
I0908 10:44:31.483863  762122 main.go:141] libmachine: () Calling .SetConfigRaw
I0908 10:44:31.484306  762122 main.go:141] libmachine: () Calling .GetMachineName
I0908 10:44:31.484561  762122 main.go:141] libmachine: (functional-461050) Calling .GetState
I0908 10:44:31.486638  762122 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
I0908 10:44:31.486680  762122 main.go:141] libmachine: Launching plugin server for driver kvm2
I0908 10:44:31.502461  762122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46567
I0908 10:44:31.502922  762122 main.go:141] libmachine: () Calling .GetVersion
I0908 10:44:31.503503  762122 main.go:141] libmachine: Using API Version  1
I0908 10:44:31.503542  762122 main.go:141] libmachine: () Calling .SetConfigRaw
I0908 10:44:31.503910  762122 main.go:141] libmachine: () Calling .GetMachineName
I0908 10:44:31.504101  762122 main.go:141] libmachine: (functional-461050) Calling .DriverName
I0908 10:44:31.504306  762122 ssh_runner.go:195] Run: systemctl --version
I0908 10:44:31.504330  762122 main.go:141] libmachine: (functional-461050) Calling .GetSSHHostname
I0908 10:44:31.507215  762122 main.go:141] libmachine: (functional-461050) DBG | domain functional-461050 has defined MAC address 52:54:00:11:0f:60 in network mk-functional-461050
I0908 10:44:31.507637  762122 main.go:141] libmachine: (functional-461050) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0f:60", ip: ""} in network mk-functional-461050: {Iface:virbr1 ExpiryTime:2025-09-08 11:41:22 +0000 UTC Type:0 Mac:52:54:00:11:0f:60 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:functional-461050 Clientid:01:52:54:00:11:0f:60}
I0908 10:44:31.507672  762122 main.go:141] libmachine: (functional-461050) DBG | domain functional-461050 has defined IP address 192.168.39.94 and MAC address 52:54:00:11:0f:60 in network mk-functional-461050
I0908 10:44:31.507877  762122 main.go:141] libmachine: (functional-461050) Calling .GetSSHPort
I0908 10:44:31.508056  762122 main.go:141] libmachine: (functional-461050) Calling .GetSSHKeyPath
I0908 10:44:31.508219  762122 main.go:141] libmachine: (functional-461050) Calling .GetSSHUsername
I0908 10:44:31.508350  762122 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/functional-461050/id_rsa Username:docker}
I0908 10:44:31.591901  762122 ssh_runner.go:195] Run: sudo crictl images --output json
I0908 10:44:31.631556  762122 main.go:141] libmachine: Making call to close driver server
I0908 10:44:31.631568  762122 main.go:141] libmachine: (functional-461050) Calling .Close
I0908 10:44:31.631891  762122 main.go:141] libmachine: Successfully made call to close driver server
I0908 10:44:31.631914  762122 main.go:141] libmachine: Making call to close connection to plugin binary
I0908 10:44:31.631903  762122 main.go:141] libmachine: (functional-461050) DBG | Closing plugin on server side
I0908 10:44:31.631926  762122 main.go:141] libmachine: Making call to close driver server
I0908 10:44:31.632029  762122 main.go:141] libmachine: (functional-461050) Calling .Close
I0908 10:44:31.632331  762122 main.go:141] libmachine: Successfully made call to close driver server
I0908 10:44:31.632346  762122 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-461050 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/my-image                      │ functional-461050  │ b0a466b21130f │ 1.47MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.0            │ a0af72f2ec6d6 │ 76MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.0            │ df0860106674d │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/kube-scheduler          │ v1.34.0            │ 46169d968e920 │ 53.8MB │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/library/nginx                 │ latest             │ ad5708199ec7d │ 197MB  │
│ localhost/minikube-local-cache-test     │ functional-461050  │ 7a43f2a24abe5 │ 3.33kB │
│ registry.k8s.io/kube-apiserver          │ v1.34.0            │ 90550c43ad2bc │ 89.1MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.95MB │
│ localhost/kicbase/echo-server           │ functional-461050  │ 9056ab77afb8e │ 4.95MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-461050 image ls --format table --alsologtostderr:
I0908 10:44:36.891520  762289 out.go:360] Setting OutFile to fd 1 ...
I0908 10:44:36.891751  762289 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 10:44:36.891759  762289 out.go:374] Setting ErrFile to fd 2...
I0908 10:44:36.891762  762289 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 10:44:36.891947  762289 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21503-748170/.minikube/bin
I0908 10:44:36.892516  762289 config.go:182] Loaded profile config "functional-461050": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 10:44:36.892609  762289 config.go:182] Loaded profile config "functional-461050": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 10:44:36.892970  762289 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
I0908 10:44:36.893034  762289 main.go:141] libmachine: Launching plugin server for driver kvm2
I0908 10:44:36.908546  762289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36011
I0908 10:44:36.909034  762289 main.go:141] libmachine: () Calling .GetVersion
I0908 10:44:36.909585  762289 main.go:141] libmachine: Using API Version  1
I0908 10:44:36.909610  762289 main.go:141] libmachine: () Calling .SetConfigRaw
I0908 10:44:36.909965  762289 main.go:141] libmachine: () Calling .GetMachineName
I0908 10:44:36.910130  762289 main.go:141] libmachine: (functional-461050) Calling .GetState
I0908 10:44:36.912010  762289 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
I0908 10:44:36.912060  762289 main.go:141] libmachine: Launching plugin server for driver kvm2
I0908 10:44:36.926802  762289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37849
I0908 10:44:36.927235  762289 main.go:141] libmachine: () Calling .GetVersion
I0908 10:44:36.927658  762289 main.go:141] libmachine: Using API Version  1
I0908 10:44:36.927678  762289 main.go:141] libmachine: () Calling .SetConfigRaw
I0908 10:44:36.927974  762289 main.go:141] libmachine: () Calling .GetMachineName
I0908 10:44:36.928191  762289 main.go:141] libmachine: (functional-461050) Calling .DriverName
I0908 10:44:36.928381  762289 ssh_runner.go:195] Run: systemctl --version
I0908 10:44:36.928411  762289 main.go:141] libmachine: (functional-461050) Calling .GetSSHHostname
I0908 10:44:36.931086  762289 main.go:141] libmachine: (functional-461050) DBG | domain functional-461050 has defined MAC address 52:54:00:11:0f:60 in network mk-functional-461050
I0908 10:44:36.931453  762289 main.go:141] libmachine: (functional-461050) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0f:60", ip: ""} in network mk-functional-461050: {Iface:virbr1 ExpiryTime:2025-09-08 11:41:22 +0000 UTC Type:0 Mac:52:54:00:11:0f:60 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:functional-461050 Clientid:01:52:54:00:11:0f:60}
I0908 10:44:36.931482  762289 main.go:141] libmachine: (functional-461050) DBG | domain functional-461050 has defined IP address 192.168.39.94 and MAC address 52:54:00:11:0f:60 in network mk-functional-461050
I0908 10:44:36.931608  762289 main.go:141] libmachine: (functional-461050) Calling .GetSSHPort
I0908 10:44:36.931772  762289 main.go:141] libmachine: (functional-461050) Calling .GetSSHKeyPath
I0908 10:44:36.931896  762289 main.go:141] libmachine: (functional-461050) Calling .GetSSHUsername
I0908 10:44:36.932021  762289 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/functional-461050/id_rsa Username:docker}
I0908 10:44:37.012339  762289 ssh_runner.go:195] Run: sudo crictl images --output json
I0908 10:44:37.051811  762289 main.go:141] libmachine: Making call to close driver server
I0908 10:44:37.051829  762289 main.go:141] libmachine: (functional-461050) Calling .Close
I0908 10:44:37.052174  762289 main.go:141] libmachine: (functional-461050) DBG | Closing plugin on server side
I0908 10:44:37.052174  762289 main.go:141] libmachine: Successfully made call to close driver server
I0908 10:44:37.052205  762289 main.go:141] libmachine: Making call to close connection to plugin binary
I0908 10:44:37.052215  762289 main.go:141] libmachine: Making call to close driver server
I0908 10:44:37.052220  762289 main.go:141] libmachine: (functional-461050) Calling .Close
I0908 10:44:37.052483  762289 main.go:141] libmachine: Successfully made call to close driver server
I0908 10:44:37.052500  762289 main.go:141] libmachine: Making call to close connection to plugin binary
I0908 10:44:37.052522  762289 main.go:141] libmachine: (functional-461050) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-461050 image ls --format json --alsologtostderr:
[{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367b
f5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","repoDigests":["registry.k8s.io/kube-apiserver@sha256:495d3253a47a9a64a62041d518678c8b101fb628488e729d9f52ddff7cf82a86","registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"89050097"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-ser
ver@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-461050"],"size":"4945146"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e
3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"c21f77027709de581dfb4f2ae29213104b6f4d7e9aa3c5d47294996376e75b8b","repoDigests":["docker.io/library/f8c14414b9544c625c47eb26500ff5dd9e77bf06ec099ed89b60b184053bd925-tmp@sha256:914b8c393c99be55b19b01e08370af419972cd5a0c62364bf2d21d18ac98ec89"],"repoTags":[],"size":"1466018"},{"id":"ad5708199ec7d169c6837fe46e1646603d0f7d0a0f54d3cd8d07bc1c818d0224","repoDigests":["docker.io/library/nginx@sha256:33e0bbc7ca9ecf108140af6288c7c9d1ecc77548cbfd3952fd8466a75edefe57","docker.io/library/nginx@sha256:f15190cd0aed34df2541e6a569d349858dd83fe2a519d7c0ec023133b6d3c4f7"],"repoTags":["docker.io/library/nginx:latest"],"size":"196544386"},{"id":"7a43f2a24abe53dc764bc8c9ae3aa72436c563b2ace24112bd0eb23dca07d978","repoDigests":["localhost/minikube-local-cache-test@sha256:a79b6e083767e214f8b8e8bbd52d39e8faa6452db07336f62e42503e10937bb0"],"repoTags":["localhos
t/minikube-local-cache-test:functional-461050"],"size":"3330"},{"id":"df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce","repoDigests":["registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067","registry.k8s.io/kube-proxy@sha256:5f71731a5eadcf74f3997dfc159bf5ca36e48c3387c19082fc21871e0dbb19af"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"73138071"},{"id":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:31b77e40d737b6d3e3b19b4afd681c9362aef06353075895452fc9a41fe34140","registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"53844823"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85
c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"b0a466b21130fd45231670f1e1cc09e887eba7d325d3e618bf2bb5ba50440ff5","repoDigests":["localhost/my-image@sha256:bd23b1b834bee7392bf7281272f81c1f28984cc63578c4c3ecf4f5f456c923bb"],"repoTags":["localhost/my-image:functional-461050"],"size":"1468600"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4
c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","repoDiges
ts":["registry.k8s.io/kube-controller-manager@sha256:82ea603ed3cce63f9f870f22299741e0011318391cf722dd924a1d5a9f8ce6f6","registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"76004183"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-461050 image ls --format json --alsologtostderr:
I0908 10:44:36.671470  762264 out.go:360] Setting OutFile to fd 1 ...
I0908 10:44:36.671574  762264 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 10:44:36.671582  762264 out.go:374] Setting ErrFile to fd 2...
I0908 10:44:36.671586  762264 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 10:44:36.671843  762264 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21503-748170/.minikube/bin
I0908 10:44:36.672481  762264 config.go:182] Loaded profile config "functional-461050": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 10:44:36.672581  762264 config.go:182] Loaded profile config "functional-461050": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 10:44:36.672967  762264 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
I0908 10:44:36.673045  762264 main.go:141] libmachine: Launching plugin server for driver kvm2
I0908 10:44:36.688003  762264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43141
I0908 10:44:36.688458  762264 main.go:141] libmachine: () Calling .GetVersion
I0908 10:44:36.688975  762264 main.go:141] libmachine: Using API Version  1
I0908 10:44:36.689003  762264 main.go:141] libmachine: () Calling .SetConfigRaw
I0908 10:44:36.689386  762264 main.go:141] libmachine: () Calling .GetMachineName
I0908 10:44:36.689576  762264 main.go:141] libmachine: (functional-461050) Calling .GetState
I0908 10:44:36.691422  762264 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
I0908 10:44:36.691471  762264 main.go:141] libmachine: Launching plugin server for driver kvm2
I0908 10:44:36.706107  762264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37733
I0908 10:44:36.706524  762264 main.go:141] libmachine: () Calling .GetVersion
I0908 10:44:36.707016  762264 main.go:141] libmachine: Using API Version  1
I0908 10:44:36.707039  762264 main.go:141] libmachine: () Calling .SetConfigRaw
I0908 10:44:36.707401  762264 main.go:141] libmachine: () Calling .GetMachineName
I0908 10:44:36.707592  762264 main.go:141] libmachine: (functional-461050) Calling .DriverName
I0908 10:44:36.707811  762264 ssh_runner.go:195] Run: systemctl --version
I0908 10:44:36.707857  762264 main.go:141] libmachine: (functional-461050) Calling .GetSSHHostname
I0908 10:44:36.710533  762264 main.go:141] libmachine: (functional-461050) DBG | domain functional-461050 has defined MAC address 52:54:00:11:0f:60 in network mk-functional-461050
I0908 10:44:36.710889  762264 main.go:141] libmachine: (functional-461050) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0f:60", ip: ""} in network mk-functional-461050: {Iface:virbr1 ExpiryTime:2025-09-08 11:41:22 +0000 UTC Type:0 Mac:52:54:00:11:0f:60 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:functional-461050 Clientid:01:52:54:00:11:0f:60}
I0908 10:44:36.710916  762264 main.go:141] libmachine: (functional-461050) DBG | domain functional-461050 has defined IP address 192.168.39.94 and MAC address 52:54:00:11:0f:60 in network mk-functional-461050
I0908 10:44:36.711047  762264 main.go:141] libmachine: (functional-461050) Calling .GetSSHPort
I0908 10:44:36.711225  762264 main.go:141] libmachine: (functional-461050) Calling .GetSSHKeyPath
I0908 10:44:36.711353  762264 main.go:141] libmachine: (functional-461050) Calling .GetSSHUsername
I0908 10:44:36.711496  762264 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/functional-461050/id_rsa Username:docker}
I0908 10:44:36.794979  762264 ssh_runner.go:195] Run: sudo crictl images --output json
I0908 10:44:36.835940  762264 main.go:141] libmachine: Making call to close driver server
I0908 10:44:36.835954  762264 main.go:141] libmachine: (functional-461050) Calling .Close
I0908 10:44:36.836274  762264 main.go:141] libmachine: Successfully made call to close driver server
I0908 10:44:36.836318  762264 main.go:141] libmachine: Making call to close connection to plugin binary
I0908 10:44:36.836327  762264 main.go:141] libmachine: (functional-461050) DBG | Closing plugin on server side
I0908 10:44:36.836332  762264 main.go:141] libmachine: Making call to close driver server
I0908 10:44:36.836360  762264 main.go:141] libmachine: (functional-461050) Calling .Close
I0908 10:44:36.836619  762264 main.go:141] libmachine: Successfully made call to close driver server
I0908 10:44:36.836635  762264 main.go:141] libmachine: Making call to close connection to plugin binary
I0908 10:44:36.836654  762264 main.go:141] libmachine: (functional-461050) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-461050 image ls --format yaml --alsologtostderr:
- id: 46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:31b77e40d737b6d3e3b19b4afd681c9362aef06353075895452fc9a41fe34140
- registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "53844823"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 7a43f2a24abe53dc764bc8c9ae3aa72436c563b2ace24112bd0eb23dca07d978
repoDigests:
- localhost/minikube-local-cache-test@sha256:a79b6e083767e214f8b8e8bbd52d39e8faa6452db07336f62e42503e10937bb0
repoTags:
- localhost/minikube-local-cache-test:functional-461050
size: "3330"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce
repoDigests:
- registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067
- registry.k8s.io/kube-proxy@sha256:5f71731a5eadcf74f3997dfc159bf5ca36e48c3387c19082fc21871e0dbb19af
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "73138071"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:82ea603ed3cce63f9f870f22299741e0011318391cf722dd924a1d5a9f8ce6f6
- registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "76004183"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-461050
size: "4945146"
- id: 90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:495d3253a47a9a64a62041d518678c8b101fb628488e729d9f52ddff7cf82a86
- registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "89050097"
- id: ad5708199ec7d169c6837fe46e1646603d0f7d0a0f54d3cd8d07bc1c818d0224
repoDigests:
- docker.io/library/nginx@sha256:33e0bbc7ca9ecf108140af6288c7c9d1ecc77548cbfd3952fd8466a75edefe57
- docker.io/library/nginx@sha256:f15190cd0aed34df2541e6a569d349858dd83fe2a519d7c0ec023133b6d3c4f7
repoTags:
- docker.io/library/nginx:latest
size: "196544386"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-461050 image ls --format yaml --alsologtostderr:
I0908 10:44:31.685965  762146 out.go:360] Setting OutFile to fd 1 ...
I0908 10:44:31.686190  762146 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 10:44:31.686199  762146 out.go:374] Setting ErrFile to fd 2...
I0908 10:44:31.686203  762146 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 10:44:31.686389  762146 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21503-748170/.minikube/bin
I0908 10:44:31.686915  762146 config.go:182] Loaded profile config "functional-461050": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 10:44:31.686999  762146 config.go:182] Loaded profile config "functional-461050": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 10:44:31.687323  762146 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
I0908 10:44:31.687376  762146 main.go:141] libmachine: Launching plugin server for driver kvm2
I0908 10:44:31.703173  762146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39119
I0908 10:44:31.703685  762146 main.go:141] libmachine: () Calling .GetVersion
I0908 10:44:31.704282  762146 main.go:141] libmachine: Using API Version  1
I0908 10:44:31.704310  762146 main.go:141] libmachine: () Calling .SetConfigRaw
I0908 10:44:31.704649  762146 main.go:141] libmachine: () Calling .GetMachineName
I0908 10:44:31.704861  762146 main.go:141] libmachine: (functional-461050) Calling .GetState
I0908 10:44:31.706831  762146 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
I0908 10:44:31.706882  762146 main.go:141] libmachine: Launching plugin server for driver kvm2
I0908 10:44:31.722419  762146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43985
I0908 10:44:31.722859  762146 main.go:141] libmachine: () Calling .GetVersion
I0908 10:44:31.723283  762146 main.go:141] libmachine: Using API Version  1
I0908 10:44:31.723308  762146 main.go:141] libmachine: () Calling .SetConfigRaw
I0908 10:44:31.723691  762146 main.go:141] libmachine: () Calling .GetMachineName
I0908 10:44:31.723902  762146 main.go:141] libmachine: (functional-461050) Calling .DriverName
I0908 10:44:31.724129  762146 ssh_runner.go:195] Run: systemctl --version
I0908 10:44:31.724157  762146 main.go:141] libmachine: (functional-461050) Calling .GetSSHHostname
I0908 10:44:31.727052  762146 main.go:141] libmachine: (functional-461050) DBG | domain functional-461050 has defined MAC address 52:54:00:11:0f:60 in network mk-functional-461050
I0908 10:44:31.727459  762146 main.go:141] libmachine: (functional-461050) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0f:60", ip: ""} in network mk-functional-461050: {Iface:virbr1 ExpiryTime:2025-09-08 11:41:22 +0000 UTC Type:0 Mac:52:54:00:11:0f:60 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:functional-461050 Clientid:01:52:54:00:11:0f:60}
I0908 10:44:31.727490  762146 main.go:141] libmachine: (functional-461050) DBG | domain functional-461050 has defined IP address 192.168.39.94 and MAC address 52:54:00:11:0f:60 in network mk-functional-461050
I0908 10:44:31.727585  762146 main.go:141] libmachine: (functional-461050) Calling .GetSSHPort
I0908 10:44:31.727760  762146 main.go:141] libmachine: (functional-461050) Calling .GetSSHKeyPath
I0908 10:44:31.727915  762146 main.go:141] libmachine: (functional-461050) Calling .GetSSHUsername
I0908 10:44:31.728031  762146 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/functional-461050/id_rsa Username:docker}
I0908 10:44:31.809255  762146 ssh_runner.go:195] Run: sudo crictl images --output json
I0908 10:44:31.869154  762146 main.go:141] libmachine: Making call to close driver server
I0908 10:44:31.869172  762146 main.go:141] libmachine: (functional-461050) Calling .Close
I0908 10:44:31.869492  762146 main.go:141] libmachine: Successfully made call to close driver server
I0908 10:44:31.869547  762146 main.go:141] libmachine: Making call to close connection to plugin binary
I0908 10:44:31.869561  762146 main.go:141] libmachine: Making call to close driver server
I0908 10:44:31.869516  762146 main.go:141] libmachine: (functional-461050) DBG | Closing plugin on server side
I0908 10:44:31.869573  762146 main.go:141] libmachine: (functional-461050) Calling .Close
I0908 10:44:31.869905  762146 main.go:141] libmachine: Successfully made call to close driver server
I0908 10:44:31.869936  762146 main.go:141] libmachine: (functional-461050) DBG | Closing plugin on server side
I0908 10:44:31.869957  762146 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.75s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-461050 ssh pgrep buildkitd: exit status 1 (197.562548ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 image build -t localhost/my-image:functional-461050 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-461050 image build -t localhost/my-image:functional-461050 testdata/build --alsologtostderr: (4.337171946s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-461050 image build -t localhost/my-image:functional-461050 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> c21f7702770
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-461050
--> b0a466b2113
Successfully tagged localhost/my-image:functional-461050
b0a466b21130fd45231670f1e1cc09e887eba7d325d3e618bf2bb5ba50440ff5
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-461050 image build -t localhost/my-image:functional-461050 testdata/build --alsologtostderr:
I0908 10:44:32.118755  762200 out.go:360] Setting OutFile to fd 1 ...
I0908 10:44:32.118998  762200 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 10:44:32.119009  762200 out.go:374] Setting ErrFile to fd 2...
I0908 10:44:32.119014  762200 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 10:44:32.119234  762200 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21503-748170/.minikube/bin
I0908 10:44:32.119763  762200 config.go:182] Loaded profile config "functional-461050": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 10:44:32.120613  762200 config.go:182] Loaded profile config "functional-461050": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 10:44:32.120936  762200 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
I0908 10:44:32.120976  762200 main.go:141] libmachine: Launching plugin server for driver kvm2
I0908 10:44:32.136222  762200 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46245
I0908 10:44:32.136729  762200 main.go:141] libmachine: () Calling .GetVersion
I0908 10:44:32.137277  762200 main.go:141] libmachine: Using API Version  1
I0908 10:44:32.137307  762200 main.go:141] libmachine: () Calling .SetConfigRaw
I0908 10:44:32.137676  762200 main.go:141] libmachine: () Calling .GetMachineName
I0908 10:44:32.137890  762200 main.go:141] libmachine: (functional-461050) Calling .GetState
I0908 10:44:32.139626  762200 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
I0908 10:44:32.139672  762200 main.go:141] libmachine: Launching plugin server for driver kvm2
I0908 10:44:32.154432  762200 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33891
I0908 10:44:32.154885  762200 main.go:141] libmachine: () Calling .GetVersion
I0908 10:44:32.155360  762200 main.go:141] libmachine: Using API Version  1
I0908 10:44:32.155390  762200 main.go:141] libmachine: () Calling .SetConfigRaw
I0908 10:44:32.155703  762200 main.go:141] libmachine: () Calling .GetMachineName
I0908 10:44:32.155894  762200 main.go:141] libmachine: (functional-461050) Calling .DriverName
I0908 10:44:32.156106  762200 ssh_runner.go:195] Run: systemctl --version
I0908 10:44:32.156138  762200 main.go:141] libmachine: (functional-461050) Calling .GetSSHHostname
I0908 10:44:32.158801  762200 main.go:141] libmachine: (functional-461050) DBG | domain functional-461050 has defined MAC address 52:54:00:11:0f:60 in network mk-functional-461050
I0908 10:44:32.159185  762200 main.go:141] libmachine: (functional-461050) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0f:60", ip: ""} in network mk-functional-461050: {Iface:virbr1 ExpiryTime:2025-09-08 11:41:22 +0000 UTC Type:0 Mac:52:54:00:11:0f:60 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:functional-461050 Clientid:01:52:54:00:11:0f:60}
I0908 10:44:32.159239  762200 main.go:141] libmachine: (functional-461050) DBG | domain functional-461050 has defined IP address 192.168.39.94 and MAC address 52:54:00:11:0f:60 in network mk-functional-461050
I0908 10:44:32.159367  762200 main.go:141] libmachine: (functional-461050) Calling .GetSSHPort
I0908 10:44:32.159539  762200 main.go:141] libmachine: (functional-461050) Calling .GetSSHKeyPath
I0908 10:44:32.159704  762200 main.go:141] libmachine: (functional-461050) Calling .GetSSHUsername
I0908 10:44:32.159886  762200 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/functional-461050/id_rsa Username:docker}
I0908 10:44:32.240249  762200 build_images.go:161] Building image from path: /tmp/build.1889070093.tar
I0908 10:44:32.240319  762200 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0908 10:44:32.253284  762200 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1889070093.tar
I0908 10:44:32.258186  762200 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1889070093.tar: stat -c "%s %y" /var/lib/minikube/build/build.1889070093.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1889070093.tar': No such file or directory
I0908 10:44:32.258221  762200 ssh_runner.go:362] scp /tmp/build.1889070093.tar --> /var/lib/minikube/build/build.1889070093.tar (3072 bytes)
I0908 10:44:32.288255  762200 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1889070093
I0908 10:44:32.301696  762200 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1889070093 -xf /var/lib/minikube/build/build.1889070093.tar
I0908 10:44:32.313798  762200 crio.go:315] Building image: /var/lib/minikube/build/build.1889070093
I0908 10:44:32.313897  762200 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-461050 /var/lib/minikube/build/build.1889070093 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0908 10:44:36.377790  762200 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-461050 /var/lib/minikube/build/build.1889070093 --cgroup-manager=cgroupfs: (4.063856706s)
I0908 10:44:36.377866  762200 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1889070093
I0908 10:44:36.393663  762200 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1889070093.tar
I0908 10:44:36.405565  762200 build_images.go:217] Built localhost/my-image:functional-461050 from /tmp/build.1889070093.tar
I0908 10:44:36.405611  762200 build_images.go:133] succeeded building to: functional-461050
I0908 10:44:36.405618  762200 build_images.go:134] failed building to: 
I0908 10:44:36.405653  762200 main.go:141] libmachine: Making call to close driver server
I0908 10:44:36.405673  762200 main.go:141] libmachine: (functional-461050) Calling .Close
I0908 10:44:36.405965  762200 main.go:141] libmachine: Successfully made call to close driver server
I0908 10:44:36.405989  762200 main.go:141] libmachine: Making call to close connection to plugin binary
I0908 10:44:36.406000  762200 main.go:141] libmachine: (functional-461050) DBG | Closing plugin on server side
I0908 10:44:36.406004  762200 main.go:141] libmachine: Making call to close driver server
I0908 10:44:36.406085  762200 main.go:141] libmachine: (functional-461050) Calling .Close
I0908 10:44:36.406348  762200 main.go:141] libmachine: Successfully made call to close driver server
I0908 10:44:36.406364  762200 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.75s)

TestFunctional/parallel/ImageCommands/Setup (3.46s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (3.445145805s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-461050
--- PASS: TestFunctional/parallel/ImageCommands/Setup (3.46s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 image load --daemon kicbase/echo-server:functional-461050 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-461050 image load --daemon kicbase/echo-server:functional-461050 --alsologtostderr: (1.321652166s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.59s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.98s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 image load --daemon kicbase/echo-server:functional-461050 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-461050 image load --daemon kicbase/echo-server:functional-461050 --alsologtostderr: (2.732592253s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.98s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:250: (dbg) Done: docker pull kicbase/echo-server:latest: (1.670952781s)
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-461050
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 image load --daemon kicbase/echo-server:functional-461050 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.56s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 image save kicbase/echo-server:functional-461050 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.52s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 image rm kicbase/echo-server:functional-461050 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.69s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.72s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-461050
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-461050 image save --daemon kicbase/echo-server:functional-461050 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-461050
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.72s)

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-461050
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-461050
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-461050
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (231.69s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-226312 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (3m50.954507389s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (231.69s)

TestMultiControlPlane/serial/DeployApp (10.68s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 kubectl -- rollout status deployment/busybox
E0908 10:58:30.883291  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-226312 kubectl -- rollout status deployment/busybox: (8.346163391s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 kubectl -- exec busybox-7b57f96db7-45w4l -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 kubectl -- exec busybox-7b57f96db7-g74tk -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 kubectl -- exec busybox-7b57f96db7-kzn54 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 kubectl -- exec busybox-7b57f96db7-45w4l -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 kubectl -- exec busybox-7b57f96db7-g74tk -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 kubectl -- exec busybox-7b57f96db7-kzn54 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 kubectl -- exec busybox-7b57f96db7-45w4l -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 kubectl -- exec busybox-7b57f96db7-g74tk -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 kubectl -- exec busybox-7b57f96db7-kzn54 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (10.68s)

TestMultiControlPlane/serial/PingHostFromPods (1.22s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 kubectl -- exec busybox-7b57f96db7-45w4l -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 kubectl -- exec busybox-7b57f96db7-45w4l -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 kubectl -- exec busybox-7b57f96db7-g74tk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 kubectl -- exec busybox-7b57f96db7-g74tk -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 kubectl -- exec busybox-7b57f96db7-kzn54 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 kubectl -- exec busybox-7b57f96db7-kzn54 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.22s)

TestMultiControlPlane/serial/AddWorkerNode (50.56s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 node add --alsologtostderr -v 5
E0908 10:58:56.712526  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/functional-461050/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:58:56.718874  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/functional-461050/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:58:56.730342  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/functional-461050/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:58:56.751746  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/functional-461050/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:58:56.793252  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/functional-461050/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:58:56.874867  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/functional-461050/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:58:57.036484  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/functional-461050/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:58:57.358573  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/functional-461050/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:58:58.000525  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/functional-461050/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:58:59.282337  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/functional-461050/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:59:01.844445  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/functional-461050/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:59:06.965889  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/functional-461050/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:59:17.208295  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/functional-461050/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-226312 node add --alsologtostderr -v 5: (49.629098138s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (50.56s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-226312 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.93s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.93s)

TestMultiControlPlane/serial/CopyFile (13.74s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 cp testdata/cp-test.txt ha-226312:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 ssh -n ha-226312 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 cp ha-226312:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1822380164/001/cp-test_ha-226312.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 ssh -n ha-226312 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 cp ha-226312:/home/docker/cp-test.txt ha-226312-m02:/home/docker/cp-test_ha-226312_ha-226312-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 ssh -n ha-226312 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 ssh -n ha-226312-m02 "sudo cat /home/docker/cp-test_ha-226312_ha-226312-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 cp ha-226312:/home/docker/cp-test.txt ha-226312-m03:/home/docker/cp-test_ha-226312_ha-226312-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 ssh -n ha-226312 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 ssh -n ha-226312-m03 "sudo cat /home/docker/cp-test_ha-226312_ha-226312-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 cp ha-226312:/home/docker/cp-test.txt ha-226312-m04:/home/docker/cp-test_ha-226312_ha-226312-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 ssh -n ha-226312 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 ssh -n ha-226312-m04 "sudo cat /home/docker/cp-test_ha-226312_ha-226312-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 cp testdata/cp-test.txt ha-226312-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 ssh -n ha-226312-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 cp ha-226312-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1822380164/001/cp-test_ha-226312-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 ssh -n ha-226312-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 cp ha-226312-m02:/home/docker/cp-test.txt ha-226312:/home/docker/cp-test_ha-226312-m02_ha-226312.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 ssh -n ha-226312-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 ssh -n ha-226312 "sudo cat /home/docker/cp-test_ha-226312-m02_ha-226312.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 cp ha-226312-m02:/home/docker/cp-test.txt ha-226312-m03:/home/docker/cp-test_ha-226312-m02_ha-226312-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 ssh -n ha-226312-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 ssh -n ha-226312-m03 "sudo cat /home/docker/cp-test_ha-226312-m02_ha-226312-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 cp ha-226312-m02:/home/docker/cp-test.txt ha-226312-m04:/home/docker/cp-test_ha-226312-m02_ha-226312-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 ssh -n ha-226312-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 ssh -n ha-226312-m04 "sudo cat /home/docker/cp-test_ha-226312-m02_ha-226312-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 cp testdata/cp-test.txt ha-226312-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 ssh -n ha-226312-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 cp ha-226312-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1822380164/001/cp-test_ha-226312-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 ssh -n ha-226312-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 cp ha-226312-m03:/home/docker/cp-test.txt ha-226312:/home/docker/cp-test_ha-226312-m03_ha-226312.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 ssh -n ha-226312-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 ssh -n ha-226312 "sudo cat /home/docker/cp-test_ha-226312-m03_ha-226312.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 cp ha-226312-m03:/home/docker/cp-test.txt ha-226312-m02:/home/docker/cp-test_ha-226312-m03_ha-226312-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 ssh -n ha-226312-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 ssh -n ha-226312-m02 "sudo cat /home/docker/cp-test_ha-226312-m03_ha-226312-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 cp ha-226312-m03:/home/docker/cp-test.txt ha-226312-m04:/home/docker/cp-test_ha-226312-m03_ha-226312-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 ssh -n ha-226312-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 ssh -n ha-226312-m04 "sudo cat /home/docker/cp-test_ha-226312-m03_ha-226312-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 cp testdata/cp-test.txt ha-226312-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 ssh -n ha-226312-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 cp ha-226312-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1822380164/001/cp-test_ha-226312-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 ssh -n ha-226312-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 cp ha-226312-m04:/home/docker/cp-test.txt ha-226312:/home/docker/cp-test_ha-226312-m04_ha-226312.txt
E0908 10:59:37.689955  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/functional-461050/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 ssh -n ha-226312-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 ssh -n ha-226312 "sudo cat /home/docker/cp-test_ha-226312-m04_ha-226312.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 cp ha-226312-m04:/home/docker/cp-test.txt ha-226312-m02:/home/docker/cp-test_ha-226312-m04_ha-226312-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 ssh -n ha-226312-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 ssh -n ha-226312-m02 "sudo cat /home/docker/cp-test_ha-226312-m04_ha-226312-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 cp ha-226312-m04:/home/docker/cp-test.txt ha-226312-m03:/home/docker/cp-test_ha-226312-m04_ha-226312-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 ssh -n ha-226312-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 ssh -n ha-226312-m03 "sudo cat /home/docker/cp-test_ha-226312-m04_ha-226312-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.74s)

TestMultiControlPlane/serial/StopSecondaryNode (91.71s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 node stop m02 --alsologtostderr -v 5
E0908 10:59:53.951842  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:00:18.651873  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/functional-461050/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-226312 node stop m02 --alsologtostderr -v 5: (1m31.006877772s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-226312 status --alsologtostderr -v 5: exit status 7 (698.739244ms)

-- stdout --
	ha-226312
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-226312-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-226312-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-226312-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0908 11:01:10.931832  769311 out.go:360] Setting OutFile to fd 1 ...
	I0908 11:01:10.932121  769311 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:01:10.932131  769311 out.go:374] Setting ErrFile to fd 2...
	I0908 11:01:10.932136  769311 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:01:10.932360  769311 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21503-748170/.minikube/bin
	I0908 11:01:10.932611  769311 out.go:368] Setting JSON to false
	I0908 11:01:10.932648  769311 mustload.go:65] Loading cluster: ha-226312
	I0908 11:01:10.932788  769311 notify.go:220] Checking for updates...
	I0908 11:01:10.933144  769311 config.go:182] Loaded profile config "ha-226312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 11:01:10.933170  769311 status.go:174] checking status of ha-226312 ...
	I0908 11:01:10.933841  769311 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:01:10.933905  769311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:01:10.955620  769311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45103
	I0908 11:01:10.956237  769311 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:01:10.956810  769311 main.go:141] libmachine: Using API Version  1
	I0908 11:01:10.956836  769311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:01:10.957205  769311 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:01:10.957448  769311 main.go:141] libmachine: (ha-226312) Calling .GetState
	I0908 11:01:10.959079  769311 status.go:371] ha-226312 host status = "Running" (err=<nil>)
	I0908 11:01:10.959109  769311 host.go:66] Checking if "ha-226312" exists ...
	I0908 11:01:10.959536  769311 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:01:10.959588  769311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:01:10.974661  769311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37747
	I0908 11:01:10.975173  769311 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:01:10.975663  769311 main.go:141] libmachine: Using API Version  1
	I0908 11:01:10.975684  769311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:01:10.976012  769311 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:01:10.976228  769311 main.go:141] libmachine: (ha-226312) Calling .GetIP
	I0908 11:01:10.979209  769311 main.go:141] libmachine: (ha-226312) DBG | domain ha-226312 has defined MAC address 52:54:00:ee:b2:33 in network mk-ha-226312
	I0908 11:01:10.979593  769311 main.go:141] libmachine: (ha-226312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:b2:33", ip: ""} in network mk-ha-226312: {Iface:virbr1 ExpiryTime:2025-09-08 11:54:46 +0000 UTC Type:0 Mac:52:54:00:ee:b2:33 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-226312 Clientid:01:52:54:00:ee:b2:33}
	I0908 11:01:10.979618  769311 main.go:141] libmachine: (ha-226312) DBG | domain ha-226312 has defined IP address 192.168.39.11 and MAC address 52:54:00:ee:b2:33 in network mk-ha-226312
	I0908 11:01:10.979763  769311 host.go:66] Checking if "ha-226312" exists ...
	I0908 11:01:10.980053  769311 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:01:10.980088  769311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:01:10.995101  769311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43037
	I0908 11:01:10.995517  769311 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:01:10.995913  769311 main.go:141] libmachine: Using API Version  1
	I0908 11:01:10.995931  769311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:01:10.996258  769311 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:01:10.996449  769311 main.go:141] libmachine: (ha-226312) Calling .DriverName
	I0908 11:01:10.996647  769311 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 11:01:10.996673  769311 main.go:141] libmachine: (ha-226312) Calling .GetSSHHostname
	I0908 11:01:10.999311  769311 main.go:141] libmachine: (ha-226312) DBG | domain ha-226312 has defined MAC address 52:54:00:ee:b2:33 in network mk-ha-226312
	I0908 11:01:10.999773  769311 main.go:141] libmachine: (ha-226312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:b2:33", ip: ""} in network mk-ha-226312: {Iface:virbr1 ExpiryTime:2025-09-08 11:54:46 +0000 UTC Type:0 Mac:52:54:00:ee:b2:33 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-226312 Clientid:01:52:54:00:ee:b2:33}
	I0908 11:01:10.999802  769311 main.go:141] libmachine: (ha-226312) DBG | domain ha-226312 has defined IP address 192.168.39.11 and MAC address 52:54:00:ee:b2:33 in network mk-ha-226312
	I0908 11:01:10.999937  769311 main.go:141] libmachine: (ha-226312) Calling .GetSSHPort
	I0908 11:01:11.000082  769311 main.go:141] libmachine: (ha-226312) Calling .GetSSHKeyPath
	I0908 11:01:11.000254  769311 main.go:141] libmachine: (ha-226312) Calling .GetSSHUsername
	I0908 11:01:11.000411  769311 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/ha-226312/id_rsa Username:docker}
	I0908 11:01:11.087648  769311 ssh_runner.go:195] Run: systemctl --version
	I0908 11:01:11.097529  769311 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 11:01:11.116494  769311 kubeconfig.go:125] found "ha-226312" server: "https://192.168.39.254:8443"
	I0908 11:01:11.116533  769311 api_server.go:166] Checking apiserver status ...
	I0908 11:01:11.116575  769311 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 11:01:11.139730  769311 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1411/cgroup
	W0908 11:01:11.152950  769311 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1411/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0908 11:01:11.153026  769311 ssh_runner.go:195] Run: ls
	I0908 11:01:11.158806  769311 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0908 11:01:11.166953  769311 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0908 11:01:11.166979  769311 status.go:463] ha-226312 apiserver status = Running (err=<nil>)
	I0908 11:01:11.166990  769311 status.go:176] ha-226312 status: &{Name:ha-226312 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 11:01:11.167011  769311 status.go:174] checking status of ha-226312-m02 ...
	I0908 11:01:11.167320  769311 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:01:11.167375  769311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:01:11.182560  769311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37901
	I0908 11:01:11.182960  769311 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:01:11.183397  769311 main.go:141] libmachine: Using API Version  1
	I0908 11:01:11.183428  769311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:01:11.183793  769311 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:01:11.184001  769311 main.go:141] libmachine: (ha-226312-m02) Calling .GetState
	I0908 11:01:11.185784  769311 status.go:371] ha-226312-m02 host status = "Stopped" (err=<nil>)
	I0908 11:01:11.185799  769311 status.go:384] host is not running, skipping remaining checks
	I0908 11:01:11.185804  769311 status.go:176] ha-226312-m02 status: &{Name:ha-226312-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 11:01:11.185820  769311 status.go:174] checking status of ha-226312-m03 ...
	I0908 11:01:11.186211  769311 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:01:11.186281  769311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:01:11.201474  769311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37013
	I0908 11:01:11.201940  769311 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:01:11.202470  769311 main.go:141] libmachine: Using API Version  1
	I0908 11:01:11.202490  769311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:01:11.202783  769311 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:01:11.202980  769311 main.go:141] libmachine: (ha-226312-m03) Calling .GetState
	I0908 11:01:11.204571  769311 status.go:371] ha-226312-m03 host status = "Running" (err=<nil>)
	I0908 11:01:11.204597  769311 host.go:66] Checking if "ha-226312-m03" exists ...
	I0908 11:01:11.204889  769311 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:01:11.204937  769311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:01:11.220693  769311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45337
	I0908 11:01:11.221152  769311 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:01:11.221647  769311 main.go:141] libmachine: Using API Version  1
	I0908 11:01:11.221674  769311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:01:11.222019  769311 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:01:11.222218  769311 main.go:141] libmachine: (ha-226312-m03) Calling .GetIP
	I0908 11:01:11.225356  769311 main.go:141] libmachine: (ha-226312-m03) DBG | domain ha-226312-m03 has defined MAC address 52:54:00:04:29:84 in network mk-ha-226312
	I0908 11:01:11.226095  769311 main.go:141] libmachine: (ha-226312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:29:84", ip: ""} in network mk-ha-226312: {Iface:virbr1 ExpiryTime:2025-09-08 11:57:07 +0000 UTC Type:0 Mac:52:54:00:04:29:84 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-226312-m03 Clientid:01:52:54:00:04:29:84}
	I0908 11:01:11.226131  769311 main.go:141] libmachine: (ha-226312-m03) DBG | domain ha-226312-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:04:29:84 in network mk-ha-226312
	I0908 11:01:11.226324  769311 host.go:66] Checking if "ha-226312-m03" exists ...
	I0908 11:01:11.226827  769311 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:01:11.226879  769311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:01:11.243815  769311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42331
	I0908 11:01:11.244316  769311 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:01:11.244803  769311 main.go:141] libmachine: Using API Version  1
	I0908 11:01:11.244830  769311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:01:11.245200  769311 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:01:11.245431  769311 main.go:141] libmachine: (ha-226312-m03) Calling .DriverName
	I0908 11:01:11.245661  769311 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 11:01:11.245686  769311 main.go:141] libmachine: (ha-226312-m03) Calling .GetSSHHostname
	I0908 11:01:11.248341  769311 main.go:141] libmachine: (ha-226312-m03) DBG | domain ha-226312-m03 has defined MAC address 52:54:00:04:29:84 in network mk-ha-226312
	I0908 11:01:11.248720  769311 main.go:141] libmachine: (ha-226312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:29:84", ip: ""} in network mk-ha-226312: {Iface:virbr1 ExpiryTime:2025-09-08 11:57:07 +0000 UTC Type:0 Mac:52:54:00:04:29:84 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-226312-m03 Clientid:01:52:54:00:04:29:84}
	I0908 11:01:11.248750  769311 main.go:141] libmachine: (ha-226312-m03) DBG | domain ha-226312-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:04:29:84 in network mk-ha-226312
	I0908 11:01:11.248832  769311 main.go:141] libmachine: (ha-226312-m03) Calling .GetSSHPort
	I0908 11:01:11.248975  769311 main.go:141] libmachine: (ha-226312-m03) Calling .GetSSHKeyPath
	I0908 11:01:11.249132  769311 main.go:141] libmachine: (ha-226312-m03) Calling .GetSSHUsername
	I0908 11:01:11.249270  769311 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/ha-226312-m03/id_rsa Username:docker}
	I0908 11:01:11.340908  769311 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 11:01:11.360159  769311 kubeconfig.go:125] found "ha-226312" server: "https://192.168.39.254:8443"
	I0908 11:01:11.360190  769311 api_server.go:166] Checking apiserver status ...
	I0908 11:01:11.360229  769311 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 11:01:11.383410  769311 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1824/cgroup
	W0908 11:01:11.395543  769311 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1824/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0908 11:01:11.395604  769311 ssh_runner.go:195] Run: ls
	I0908 11:01:11.401517  769311 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0908 11:01:11.407022  769311 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0908 11:01:11.407046  769311 status.go:463] ha-226312-m03 apiserver status = Running (err=<nil>)
	I0908 11:01:11.407055  769311 status.go:176] ha-226312-m03 status: &{Name:ha-226312-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 11:01:11.407070  769311 status.go:174] checking status of ha-226312-m04 ...
	I0908 11:01:11.407420  769311 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:01:11.407464  769311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:01:11.423267  769311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39727
	I0908 11:01:11.423737  769311 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:01:11.424202  769311 main.go:141] libmachine: Using API Version  1
	I0908 11:01:11.424223  769311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:01:11.424579  769311 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:01:11.424768  769311 main.go:141] libmachine: (ha-226312-m04) Calling .GetState
	I0908 11:01:11.426205  769311 status.go:371] ha-226312-m04 host status = "Running" (err=<nil>)
	I0908 11:01:11.426222  769311 host.go:66] Checking if "ha-226312-m04" exists ...
	I0908 11:01:11.426507  769311 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:01:11.426541  769311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:01:11.444144  769311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44821
	I0908 11:01:11.444568  769311 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:01:11.445015  769311 main.go:141] libmachine: Using API Version  1
	I0908 11:01:11.445039  769311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:01:11.445451  769311 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:01:11.445654  769311 main.go:141] libmachine: (ha-226312-m04) Calling .GetIP
	I0908 11:01:11.448352  769311 main.go:141] libmachine: (ha-226312-m04) DBG | domain ha-226312-m04 has defined MAC address 52:54:00:98:4b:e6 in network mk-ha-226312
	I0908 11:01:11.448723  769311 main.go:141] libmachine: (ha-226312-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:4b:e6", ip: ""} in network mk-ha-226312: {Iface:virbr1 ExpiryTime:2025-09-08 11:58:51 +0000 UTC Type:0 Mac:52:54:00:98:4b:e6 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-226312-m04 Clientid:01:52:54:00:98:4b:e6}
	I0908 11:01:11.448753  769311 main.go:141] libmachine: (ha-226312-m04) DBG | domain ha-226312-m04 has defined IP address 192.168.39.243 and MAC address 52:54:00:98:4b:e6 in network mk-ha-226312
	I0908 11:01:11.448902  769311 host.go:66] Checking if "ha-226312-m04" exists ...
	I0908 11:01:11.449308  769311 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:01:11.449366  769311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:01:11.465074  769311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46639
	I0908 11:01:11.465679  769311 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:01:11.466361  769311 main.go:141] libmachine: Using API Version  1
	I0908 11:01:11.466391  769311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:01:11.466748  769311 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:01:11.466913  769311 main.go:141] libmachine: (ha-226312-m04) Calling .DriverName
	I0908 11:01:11.467068  769311 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 11:01:11.467087  769311 main.go:141] libmachine: (ha-226312-m04) Calling .GetSSHHostname
	I0908 11:01:11.470046  769311 main.go:141] libmachine: (ha-226312-m04) DBG | domain ha-226312-m04 has defined MAC address 52:54:00:98:4b:e6 in network mk-ha-226312
	I0908 11:01:11.470486  769311 main.go:141] libmachine: (ha-226312-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:4b:e6", ip: ""} in network mk-ha-226312: {Iface:virbr1 ExpiryTime:2025-09-08 11:58:51 +0000 UTC Type:0 Mac:52:54:00:98:4b:e6 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-226312-m04 Clientid:01:52:54:00:98:4b:e6}
	I0908 11:01:11.470518  769311 main.go:141] libmachine: (ha-226312-m04) DBG | domain ha-226312-m04 has defined IP address 192.168.39.243 and MAC address 52:54:00:98:4b:e6 in network mk-ha-226312
	I0908 11:01:11.470655  769311 main.go:141] libmachine: (ha-226312-m04) Calling .GetSSHPort
	I0908 11:01:11.470813  769311 main.go:141] libmachine: (ha-226312-m04) Calling .GetSSHKeyPath
	I0908 11:01:11.470985  769311 main.go:141] libmachine: (ha-226312-m04) Calling .GetSSHUsername
	I0908 11:01:11.471150  769311 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/ha-226312-m04/id_rsa Username:docker}
	I0908 11:01:11.559478  769311 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 11:01:11.579199  769311 status.go:176] ha-226312-m04 status: &{Name:ha-226312-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.71s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.68s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.68s)

TestMultiControlPlane/serial/RestartSecondaryNode (61.89s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 node start m02 --alsologtostderr -v 5
E0908 11:01:40.573599  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/functional-461050/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-226312 node start m02 --alsologtostderr -v 5: (1m0.604731681s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-226312 status --alsologtostderr -v 5: (1.187878522s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (61.89s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.076902614s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.08s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (411.87s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 stop --alsologtostderr -v 5
E0908 11:03:30.883290  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:03:56.712556  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/functional-461050/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:04:24.415322  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/functional-461050/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-226312 stop --alsologtostderr -v 5: (4m34.735892313s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 start --wait true --alsologtostderr -v 5
E0908 11:08:30.883530  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:08:56.711797  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/functional-461050/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-226312 start --wait true --alsologtostderr -v 5: (2m17.012654595s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (411.87s)

TestMultiControlPlane/serial/DeleteSecondaryNode (18.55s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-226312 node delete m03 --alsologtostderr -v 5: (17.752092523s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.55s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.67s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.67s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (272.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 stop --alsologtostderr -v 5
E0908 11:13:30.882372  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:13:56.711818  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/functional-461050/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-226312 stop --alsologtostderr -v 5: (4m32.674602246s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-226312 status --alsologtostderr -v 5: exit status 7 (114.697399ms)

                                                
                                                
-- stdout --
	ha-226312
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-226312-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-226312-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 11:13:59.038739  773378 out.go:360] Setting OutFile to fd 1 ...
	I0908 11:13:59.038953  773378 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:13:59.038961  773378 out.go:374] Setting ErrFile to fd 2...
	I0908 11:13:59.038965  773378 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:13:59.039156  773378 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21503-748170/.minikube/bin
	I0908 11:13:59.039332  773378 out.go:368] Setting JSON to false
	I0908 11:13:59.039358  773378 mustload.go:65] Loading cluster: ha-226312
	I0908 11:13:59.039479  773378 notify.go:220] Checking for updates...
	I0908 11:13:59.039775  773378 config.go:182] Loaded profile config "ha-226312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 11:13:59.039800  773378 status.go:174] checking status of ha-226312 ...
	I0908 11:13:59.040313  773378 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:13:59.040363  773378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:13:59.063865  773378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42109
	I0908 11:13:59.064293  773378 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:13:59.064812  773378 main.go:141] libmachine: Using API Version  1
	I0908 11:13:59.064834  773378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:13:59.065224  773378 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:13:59.065486  773378 main.go:141] libmachine: (ha-226312) Calling .GetState
	I0908 11:13:59.067252  773378 status.go:371] ha-226312 host status = "Stopped" (err=<nil>)
	I0908 11:13:59.067271  773378 status.go:384] host is not running, skipping remaining checks
	I0908 11:13:59.067279  773378 status.go:176] ha-226312 status: &{Name:ha-226312 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 11:13:59.067315  773378 status.go:174] checking status of ha-226312-m02 ...
	I0908 11:13:59.067620  773378 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:13:59.067656  773378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:13:59.082621  773378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32877
	I0908 11:13:59.083017  773378 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:13:59.083503  773378 main.go:141] libmachine: Using API Version  1
	I0908 11:13:59.083522  773378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:13:59.083850  773378 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:13:59.084045  773378 main.go:141] libmachine: (ha-226312-m02) Calling .GetState
	I0908 11:13:59.085706  773378 status.go:371] ha-226312-m02 host status = "Stopped" (err=<nil>)
	I0908 11:13:59.085720  773378 status.go:384] host is not running, skipping remaining checks
	I0908 11:13:59.085725  773378 status.go:176] ha-226312-m02 status: &{Name:ha-226312-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 11:13:59.085739  773378 status.go:174] checking status of ha-226312-m04 ...
	I0908 11:13:59.086009  773378 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:13:59.086046  773378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:13:59.100961  773378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45305
	I0908 11:13:59.101342  773378 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:13:59.101754  773378 main.go:141] libmachine: Using API Version  1
	I0908 11:13:59.101774  773378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:13:59.102089  773378 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:13:59.102287  773378 main.go:141] libmachine: (ha-226312-m04) Calling .GetState
	I0908 11:13:59.103758  773378 status.go:371] ha-226312-m04 host status = "Stopped" (err=<nil>)
	I0908 11:13:59.103772  773378 status.go:384] host is not running, skipping remaining checks
	I0908 11:13:59.103776  773378 status.go:176] ha-226312-m04 status: &{Name:ha-226312-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (272.79s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (101.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E0908 11:15:19.778538  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/functional-461050/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-226312 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m40.583905385s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (101.39s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (90.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 node add --control-plane --alsologtostderr -v 5
E0908 11:16:33.953789  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-226312 node add --control-plane --alsologtostderr -v 5: (1m29.982993422s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-226312 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (90.94s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.94s)

                                                
                                    
TestJSONOutput/start/Command (85.27s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-401747 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
E0908 11:18:30.886231  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-401747 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m25.271420281s)
--- PASS: TestJSONOutput/start/Command (85.27s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.79s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-401747 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.79s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.7s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-401747 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.70s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.36s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-401747 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-401747 --output=json --user=testUser: (7.360444348s)
--- PASS: TestJSONOutput/stop/Command (7.36s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-522625 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-522625 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (62.337481ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b317f472-e2cf-4563-8578-7790cd58141f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-522625] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6a561aa2-f53a-40ff-8a2c-26a5d0fc8bdf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21503"}}
	{"specversion":"1.0","id":"1fa15ec8-53c9-4efe-a642-a3e74655e3de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"55bd5f7f-aa49-44cb-b7a0-c857eede5171","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21503-748170/kubeconfig"}}
	{"specversion":"1.0","id":"06d16f12-1689-442c-ae00-274ed45186ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21503-748170/.minikube"}}
	{"specversion":"1.0","id":"255f43d8-8e67-4e40-940f-dbc991955acc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"6d79ceaf-fe84-457a-910d-8c4ddc2b10ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b2c79a61-ec32-4705-8025-8d2144fee0af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-522625" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-522625
--- PASS: TestErrorJSONOutput (0.20s)

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (93.2s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-622725 --driver=kvm2  --container-runtime=crio
E0908 11:18:56.716180  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/functional-461050/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-622725 --driver=kvm2  --container-runtime=crio: (45.135568062s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-645715 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-645715 --driver=kvm2  --container-runtime=crio: (45.079443356s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-622725
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-645715
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-645715" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-645715
helpers_test.go:175: Cleaning up "first-622725" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-622725
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-622725: (1.051670835s)
--- PASS: TestMinikubeProfile (93.20s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (29.4s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-567709 --memory=3072 --mount-string /tmp/TestMountStartserial759981628/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-567709 --memory=3072 --mount-string /tmp/TestMountStartserial759981628/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.395376606s)
--- PASS: TestMountStart/serial/StartWithMountFirst (29.40s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-567709 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-567709 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (30.88s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-579887 --memory=3072 --mount-string /tmp/TestMountStartserial759981628/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-579887 --memory=3072 --mount-string /tmp/TestMountStartserial759981628/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.880841059s)
--- PASS: TestMountStart/serial/StartWithMountSecond (30.88s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-579887 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-579887 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.91s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-567709 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.91s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-579887 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-579887 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                    
TestMountStart/serial/Stop (1.76s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-579887
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-579887: (1.754930876s)
--- PASS: TestMountStart/serial/Stop (1.76s)

                                                
                                    
TestMountStart/serial/RestartStopped (23.16s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-579887
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-579887: (22.158323851s)
--- PASS: TestMountStart/serial/RestartStopped (23.16s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-579887 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-579887 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (116.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-064020 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0908 11:23:30.883028  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-064020 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m55.572225014s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-064020 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (116.02s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (9.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-064020 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-064020 -- rollout status deployment/busybox
E0908 11:23:56.712366  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/functional-461050/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-064020 -- rollout status deployment/busybox: (8.05177299s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-064020 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-064020 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-064020 -- exec busybox-7b57f96db7-ts8fm -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-064020 -- exec busybox-7b57f96db7-xwj42 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-064020 -- exec busybox-7b57f96db7-ts8fm -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-064020 -- exec busybox-7b57f96db7-xwj42 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-064020 -- exec busybox-7b57f96db7-ts8fm -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-064020 -- exec busybox-7b57f96db7-xwj42 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (9.72s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-064020 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-064020 -- exec busybox-7b57f96db7-ts8fm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-064020 -- exec busybox-7b57f96db7-ts8fm -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-064020 -- exec busybox-7b57f96db7-xwj42 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-064020 -- exec busybox-7b57f96db7-xwj42 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.81s)

                                                
                                    
TestMultiNode/serial/AddNode (52.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-064020 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-064020 -v=5 --alsologtostderr: (52.060087552s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-064020 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (52.64s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-064020 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.60s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-064020 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-064020 cp testdata/cp-test.txt multinode-064020:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-064020 ssh -n multinode-064020 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-064020 cp multinode-064020:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile858557521/001/cp-test_multinode-064020.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-064020 ssh -n multinode-064020 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-064020 cp multinode-064020:/home/docker/cp-test.txt multinode-064020-m02:/home/docker/cp-test_multinode-064020_multinode-064020-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-064020 ssh -n multinode-064020 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-064020 ssh -n multinode-064020-m02 "sudo cat /home/docker/cp-test_multinode-064020_multinode-064020-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-064020 cp multinode-064020:/home/docker/cp-test.txt multinode-064020-m03:/home/docker/cp-test_multinode-064020_multinode-064020-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-064020 ssh -n multinode-064020 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-064020 ssh -n multinode-064020-m03 "sudo cat /home/docker/cp-test_multinode-064020_multinode-064020-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-064020 cp testdata/cp-test.txt multinode-064020-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-064020 ssh -n multinode-064020-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-064020 cp multinode-064020-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile858557521/001/cp-test_multinode-064020-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-064020 ssh -n multinode-064020-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-064020 cp multinode-064020-m02:/home/docker/cp-test.txt multinode-064020:/home/docker/cp-test_multinode-064020-m02_multinode-064020.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-064020 ssh -n multinode-064020-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-064020 ssh -n multinode-064020 "sudo cat /home/docker/cp-test_multinode-064020-m02_multinode-064020.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-064020 cp multinode-064020-m02:/home/docker/cp-test.txt multinode-064020-m03:/home/docker/cp-test_multinode-064020-m02_multinode-064020-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-064020 ssh -n multinode-064020-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-064020 ssh -n multinode-064020-m03 "sudo cat /home/docker/cp-test_multinode-064020-m02_multinode-064020-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-064020 cp testdata/cp-test.txt multinode-064020-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-064020 ssh -n multinode-064020-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-064020 cp multinode-064020-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile858557521/001/cp-test_multinode-064020-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-064020 ssh -n multinode-064020-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-064020 cp multinode-064020-m03:/home/docker/cp-test.txt multinode-064020:/home/docker/cp-test_multinode-064020-m03_multinode-064020.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-064020 ssh -n multinode-064020-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-064020 ssh -n multinode-064020 "sudo cat /home/docker/cp-test_multinode-064020-m03_multinode-064020.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-064020 cp multinode-064020-m03:/home/docker/cp-test.txt multinode-064020-m02:/home/docker/cp-test_multinode-064020-m03_multinode-064020-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-064020 ssh -n multinode-064020-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-064020 ssh -n multinode-064020-m02 "sudo cat /home/docker/cp-test_multinode-064020-m03_multinode-064020-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.39s)

                                                
                                    
TestMultiNode/serial/StopNode (3.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-064020 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-064020 node stop m03: (2.303201844s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-064020 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-064020 status: exit status 7 (441.31193ms)

                                                
                                                
-- stdout --
	multinode-064020
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-064020-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-064020-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-064020 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-064020 status --alsologtostderr: exit status 7 (432.16203ms)

                                                
                                                
-- stdout --
	multinode-064020
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-064020-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-064020-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 11:25:02.820143  781664 out.go:360] Setting OutFile to fd 1 ...
	I0908 11:25:02.820255  781664 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:25:02.820267  781664 out.go:374] Setting ErrFile to fd 2...
	I0908 11:25:02.820273  781664 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:25:02.820485  781664 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21503-748170/.minikube/bin
	I0908 11:25:02.820650  781664 out.go:368] Setting JSON to false
	I0908 11:25:02.820681  781664 mustload.go:65] Loading cluster: multinode-064020
	I0908 11:25:02.820745  781664 notify.go:220] Checking for updates...
	I0908 11:25:02.821053  781664 config.go:182] Loaded profile config "multinode-064020": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 11:25:02.821074  781664 status.go:174] checking status of multinode-064020 ...
	I0908 11:25:02.821526  781664 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:25:02.821568  781664 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:25:02.838633  781664 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32991
	I0908 11:25:02.839146  781664 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:25:02.839835  781664 main.go:141] libmachine: Using API Version  1
	I0908 11:25:02.839869  781664 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:25:02.840212  781664 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:25:02.840429  781664 main.go:141] libmachine: (multinode-064020) Calling .GetState
	I0908 11:25:02.842165  781664 status.go:371] multinode-064020 host status = "Running" (err=<nil>)
	I0908 11:25:02.842182  781664 host.go:66] Checking if "multinode-064020" exists ...
	I0908 11:25:02.842517  781664 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:25:02.842574  781664 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:25:02.858215  781664 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38231
	I0908 11:25:02.858704  781664 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:25:02.859174  781664 main.go:141] libmachine: Using API Version  1
	I0908 11:25:02.859192  781664 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:25:02.859521  781664 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:25:02.859680  781664 main.go:141] libmachine: (multinode-064020) Calling .GetIP
	I0908 11:25:02.862256  781664 main.go:141] libmachine: (multinode-064020) DBG | domain multinode-064020 has defined MAC address 52:54:00:18:f0:79 in network mk-multinode-064020
	I0908 11:25:02.862741  781664 main.go:141] libmachine: (multinode-064020) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:f0:79", ip: ""} in network mk-multinode-064020: {Iface:virbr1 ExpiryTime:2025-09-08 12:22:08 +0000 UTC Type:0 Mac:52:54:00:18:f0:79 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-064020 Clientid:01:52:54:00:18:f0:79}
	I0908 11:25:02.862768  781664 main.go:141] libmachine: (multinode-064020) DBG | domain multinode-064020 has defined IP address 192.168.39.65 and MAC address 52:54:00:18:f0:79 in network mk-multinode-064020
	I0908 11:25:02.862888  781664 host.go:66] Checking if "multinode-064020" exists ...
	I0908 11:25:02.863157  781664 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:25:02.863195  781664 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:25:02.878786  781664 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46039
	I0908 11:25:02.879183  781664 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:25:02.879595  781664 main.go:141] libmachine: Using API Version  1
	I0908 11:25:02.879618  781664 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:25:02.879996  781664 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:25:02.880184  781664 main.go:141] libmachine: (multinode-064020) Calling .DriverName
	I0908 11:25:02.880446  781664 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 11:25:02.880479  781664 main.go:141] libmachine: (multinode-064020) Calling .GetSSHHostname
	I0908 11:25:02.883192  781664 main.go:141] libmachine: (multinode-064020) DBG | domain multinode-064020 has defined MAC address 52:54:00:18:f0:79 in network mk-multinode-064020
	I0908 11:25:02.883640  781664 main.go:141] libmachine: (multinode-064020) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:f0:79", ip: ""} in network mk-multinode-064020: {Iface:virbr1 ExpiryTime:2025-09-08 12:22:08 +0000 UTC Type:0 Mac:52:54:00:18:f0:79 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-064020 Clientid:01:52:54:00:18:f0:79}
	I0908 11:25:02.883666  781664 main.go:141] libmachine: (multinode-064020) DBG | domain multinode-064020 has defined IP address 192.168.39.65 and MAC address 52:54:00:18:f0:79 in network mk-multinode-064020
	I0908 11:25:02.883780  781664 main.go:141] libmachine: (multinode-064020) Calling .GetSSHPort
	I0908 11:25:02.883956  781664 main.go:141] libmachine: (multinode-064020) Calling .GetSSHKeyPath
	I0908 11:25:02.884084  781664 main.go:141] libmachine: (multinode-064020) Calling .GetSSHUsername
	I0908 11:25:02.884284  781664 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/multinode-064020/id_rsa Username:docker}
	I0908 11:25:02.961843  781664 ssh_runner.go:195] Run: systemctl --version
	I0908 11:25:02.968253  781664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 11:25:02.985761  781664 kubeconfig.go:125] found "multinode-064020" server: "https://192.168.39.65:8443"
	I0908 11:25:02.985797  781664 api_server.go:166] Checking apiserver status ...
	I0908 11:25:02.985828  781664 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 11:25:03.004800  781664 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1395/cgroup
	W0908 11:25:03.015933  781664 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1395/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0908 11:25:03.015992  781664 ssh_runner.go:195] Run: ls
	I0908 11:25:03.020997  781664 api_server.go:253] Checking apiserver healthz at https://192.168.39.65:8443/healthz ...
	I0908 11:25:03.025654  781664 api_server.go:279] https://192.168.39.65:8443/healthz returned 200:
	ok
	I0908 11:25:03.025675  781664 status.go:463] multinode-064020 apiserver status = Running (err=<nil>)
	I0908 11:25:03.025686  781664 status.go:176] multinode-064020 status: &{Name:multinode-064020 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 11:25:03.025705  781664 status.go:174] checking status of multinode-064020-m02 ...
	I0908 11:25:03.025982  781664 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:25:03.026014  781664 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:25:03.041695  781664 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34479
	I0908 11:25:03.042152  781664 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:25:03.042593  781664 main.go:141] libmachine: Using API Version  1
	I0908 11:25:03.042615  781664 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:25:03.043074  781664 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:25:03.043292  781664 main.go:141] libmachine: (multinode-064020-m02) Calling .GetState
	I0908 11:25:03.044801  781664 status.go:371] multinode-064020-m02 host status = "Running" (err=<nil>)
	I0908 11:25:03.044820  781664 host.go:66] Checking if "multinode-064020-m02" exists ...
	I0908 11:25:03.045111  781664 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:25:03.045151  781664 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:25:03.060278  781664 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40255
	I0908 11:25:03.060663  781664 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:25:03.061102  781664 main.go:141] libmachine: Using API Version  1
	I0908 11:25:03.061127  781664 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:25:03.061522  781664 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:25:03.061710  781664 main.go:141] libmachine: (multinode-064020-m02) Calling .GetIP
	I0908 11:25:03.064868  781664 main.go:141] libmachine: (multinode-064020-m02) DBG | domain multinode-064020-m02 has defined MAC address 52:54:00:bc:40:6b in network mk-multinode-064020
	I0908 11:25:03.065352  781664 main.go:141] libmachine: (multinode-064020-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:40:6b", ip: ""} in network mk-multinode-064020: {Iface:virbr1 ExpiryTime:2025-09-08 12:23:10 +0000 UTC Type:0 Mac:52:54:00:bc:40:6b Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:multinode-064020-m02 Clientid:01:52:54:00:bc:40:6b}
	I0908 11:25:03.065387  781664 main.go:141] libmachine: (multinode-064020-m02) DBG | domain multinode-064020-m02 has defined IP address 192.168.39.7 and MAC address 52:54:00:bc:40:6b in network mk-multinode-064020
	I0908 11:25:03.065522  781664 host.go:66] Checking if "multinode-064020-m02" exists ...
	I0908 11:25:03.065824  781664 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:25:03.065869  781664 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:25:03.081345  781664 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33241
	I0908 11:25:03.081744  781664 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:25:03.082237  781664 main.go:141] libmachine: Using API Version  1
	I0908 11:25:03.082260  781664 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:25:03.082594  781664 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:25:03.082777  781664 main.go:141] libmachine: (multinode-064020-m02) Calling .DriverName
	I0908 11:25:03.083006  781664 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 11:25:03.083029  781664 main.go:141] libmachine: (multinode-064020-m02) Calling .GetSSHHostname
	I0908 11:25:03.085846  781664 main.go:141] libmachine: (multinode-064020-m02) DBG | domain multinode-064020-m02 has defined MAC address 52:54:00:bc:40:6b in network mk-multinode-064020
	I0908 11:25:03.086294  781664 main.go:141] libmachine: (multinode-064020-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:40:6b", ip: ""} in network mk-multinode-064020: {Iface:virbr1 ExpiryTime:2025-09-08 12:23:10 +0000 UTC Type:0 Mac:52:54:00:bc:40:6b Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:multinode-064020-m02 Clientid:01:52:54:00:bc:40:6b}
	I0908 11:25:03.086320  781664 main.go:141] libmachine: (multinode-064020-m02) DBG | domain multinode-064020-m02 has defined IP address 192.168.39.7 and MAC address 52:54:00:bc:40:6b in network mk-multinode-064020
	I0908 11:25:03.086471  781664 main.go:141] libmachine: (multinode-064020-m02) Calling .GetSSHPort
	I0908 11:25:03.086639  781664 main.go:141] libmachine: (multinode-064020-m02) Calling .GetSSHKeyPath
	I0908 11:25:03.086791  781664 main.go:141] libmachine: (multinode-064020-m02) Calling .GetSSHUsername
	I0908 11:25:03.086914  781664 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21503-748170/.minikube/machines/multinode-064020-m02/id_rsa Username:docker}
	I0908 11:25:03.169447  781664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 11:25:03.185176  781664 status.go:176] multinode-064020-m02 status: &{Name:multinode-064020-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0908 11:25:03.185224  781664 status.go:174] checking status of multinode-064020-m03 ...
	I0908 11:25:03.185727  781664 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:25:03.185792  781664 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:25:03.201676  781664 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38491
	I0908 11:25:03.202202  781664 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:25:03.202754  781664 main.go:141] libmachine: Using API Version  1
	I0908 11:25:03.202776  781664 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:25:03.203074  781664 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:25:03.203253  781664 main.go:141] libmachine: (multinode-064020-m03) Calling .GetState
	I0908 11:25:03.204737  781664 status.go:371] multinode-064020-m03 host status = "Stopped" (err=<nil>)
	I0908 11:25:03.204750  781664 status.go:384] host is not running, skipping remaining checks
	I0908 11:25:03.204756  781664 status.go:176] multinode-064020-m03 status: &{Name:multinode-064020-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.18s)

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (44.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-064020 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-064020 node start m03 -v=5 --alsologtostderr: (43.569546135s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-064020 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (44.21s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (336.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-064020
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-064020
E0908 11:28:30.885556  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-064020: (3m3.944838527s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-064020 --wait=true -v=5 --alsologtostderr
E0908 11:28:56.712436  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/functional-461050/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-064020 --wait=true -v=5 --alsologtostderr: (2m32.544205044s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-064020
--- PASS: TestMultiNode/serial/RestartKeepsNodes (336.59s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-064020 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-064020 node delete m03: (2.374502858s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-064020 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.94s)
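For anyone re-running the readiness check by hand: the go-template in the last step above walks each node's .status.conditions and prints only the Ready condition's status, one line per node, so after the delete every remaining node should print True. The jsonpath form below is an assumed equivalent for convenience, not something the suite runs.

	# go-template used by the test (prints the Ready status per node)
	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
	# assumed jsonpath equivalent
	kubectl get nodes -o jsonpath='{range .items[*]}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'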

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (181.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-064020 stop
E0908 11:31:59.782930  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/functional-461050/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:33:13.957530  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:33:30.885385  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:33:56.717278  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/functional-461050/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-064020 stop: (3m1.738790567s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-064020 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-064020 status: exit status 7 (97.768507ms)

                                                
                                                
-- stdout --
	multinode-064020
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-064020-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-064020 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-064020 status --alsologtostderr: exit status 7 (88.019528ms)

                                                
                                                
-- stdout --
	multinode-064020
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-064020-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 11:34:28.838371  784631 out.go:360] Setting OutFile to fd 1 ...
	I0908 11:34:28.838630  784631 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:34:28.838639  784631 out.go:374] Setting ErrFile to fd 2...
	I0908 11:34:28.838643  784631 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:34:28.838866  784631 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21503-748170/.minikube/bin
	I0908 11:34:28.839020  784631 out.go:368] Setting JSON to false
	I0908 11:34:28.839048  784631 mustload.go:65] Loading cluster: multinode-064020
	I0908 11:34:28.839137  784631 notify.go:220] Checking for updates...
	I0908 11:34:28.839415  784631 config.go:182] Loaded profile config "multinode-064020": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 11:34:28.839436  784631 status.go:174] checking status of multinode-064020 ...
	I0908 11:34:28.839838  784631 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:34:28.839879  784631 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:34:28.855851  784631 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35021
	I0908 11:34:28.856376  784631 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:34:28.857071  784631 main.go:141] libmachine: Using API Version  1
	I0908 11:34:28.857122  784631 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:34:28.857527  784631 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:34:28.857735  784631 main.go:141] libmachine: (multinode-064020) Calling .GetState
	I0908 11:34:28.859422  784631 status.go:371] multinode-064020 host status = "Stopped" (err=<nil>)
	I0908 11:34:28.859440  784631 status.go:384] host is not running, skipping remaining checks
	I0908 11:34:28.859447  784631 status.go:176] multinode-064020 status: &{Name:multinode-064020 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 11:34:28.859491  784631 status.go:174] checking status of multinode-064020-m02 ...
	I0908 11:34:28.859929  784631 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21503-748170/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:34:28.859982  784631 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:34:28.875029  784631 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37547
	I0908 11:34:28.875468  784631 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:34:28.875864  784631 main.go:141] libmachine: Using API Version  1
	I0908 11:34:28.875885  784631 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:34:28.876213  784631 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:34:28.876414  784631 main.go:141] libmachine: (multinode-064020-m02) Calling .GetState
	I0908 11:34:28.877989  784631 status.go:371] multinode-064020-m02 host status = "Stopped" (err=<nil>)
	I0908 11:34:28.878006  784631 status.go:384] host is not running, skipping remaining checks
	I0908 11:34:28.878024  784631 status.go:176] multinode-064020-m02 status: &{Name:multinode-064020-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (181.93s)
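In this run "status" deliberately exits non-zero once both hosts are down, and the test accepts exit code 7 as the expected stopped state. A minimal way to observe the same thing by hand, using the command exactly as invoked above:

	out/minikube-linux-amd64 -p multinode-064020 status; echo "exit=$?"
	# with both nodes stopped, this run printed host/kubelet/apiserver: Stopped and exit=7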

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (109.05s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-064020 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-064020 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m48.488257541s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-064020 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (109.05s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (48.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-064020
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-064020-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-064020-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (63.025487ms)

                                                
                                                
-- stdout --
	* [multinode-064020-m02] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21503
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21503-748170/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21503-748170/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-064020-m02' is duplicated with machine name 'multinode-064020-m02' in profile 'multinode-064020'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-064020-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-064020-m03 --driver=kvm2  --container-runtime=crio: (46.834522558s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-064020
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-064020: exit status 80 (235.370886ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-064020 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-064020-m03 already exists in multinode-064020-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_3.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-064020-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (48.01s)
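A condensed sketch of the conflict this test exercises (profile and node names are the ones from this run): worker machines of profile multinode-064020 are named multinode-064020-m02, -m03, and so on, so a standalone profile that reuses one of those names collides.

	out/minikube-linux-amd64 start -p multinode-064020-m02 --driver=kvm2 --container-runtime=crio   # rejected: MK_USAGE, duplicate profile name
	out/minikube-linux-amd64 start -p multinode-064020-m03 --driver=kvm2 --container-runtime=crio   # allowed: becomes its own single-node cluster
	out/minikube-linux-amd64 node add -p multinode-064020                                            # then fails: GUEST_NODE_ADD, the m03 name is taken
	out/minikube-linux-amd64 delete -p multinode-064020-m03                                          # cleanup, as the test does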

                                                
                                    
x
+
TestScheduledStopUnix (118.66s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-265118 --memory=3072 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-265118 --memory=3072 --driver=kvm2  --container-runtime=crio: (46.894035477s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-265118 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-265118 -n scheduled-stop-265118
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-265118 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0908 11:40:24.670778  752332 retry.go:31] will retry after 118.583µs: open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/scheduled-stop-265118/pid: no such file or directory
I0908 11:40:24.671955  752332 retry.go:31] will retry after 183.324µs: open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/scheduled-stop-265118/pid: no such file or directory
I0908 11:40:24.673124  752332 retry.go:31] will retry after 269.859µs: open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/scheduled-stop-265118/pid: no such file or directory
I0908 11:40:24.674248  752332 retry.go:31] will retry after 320.141µs: open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/scheduled-stop-265118/pid: no such file or directory
I0908 11:40:24.675380  752332 retry.go:31] will retry after 438.683µs: open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/scheduled-stop-265118/pid: no such file or directory
I0908 11:40:24.676511  752332 retry.go:31] will retry after 975.139µs: open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/scheduled-stop-265118/pid: no such file or directory
I0908 11:40:24.677651  752332 retry.go:31] will retry after 1.451152ms: open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/scheduled-stop-265118/pid: no such file or directory
I0908 11:40:24.679908  752332 retry.go:31] will retry after 2.275753ms: open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/scheduled-stop-265118/pid: no such file or directory
I0908 11:40:24.683133  752332 retry.go:31] will retry after 2.618259ms: open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/scheduled-stop-265118/pid: no such file or directory
I0908 11:40:24.686350  752332 retry.go:31] will retry after 5.167006ms: open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/scheduled-stop-265118/pid: no such file or directory
I0908 11:40:24.692552  752332 retry.go:31] will retry after 3.114477ms: open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/scheduled-stop-265118/pid: no such file or directory
I0908 11:40:24.696729  752332 retry.go:31] will retry after 8.781152ms: open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/scheduled-stop-265118/pid: no such file or directory
I0908 11:40:24.705949  752332 retry.go:31] will retry after 6.690797ms: open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/scheduled-stop-265118/pid: no such file or directory
I0908 11:40:24.713202  752332 retry.go:31] will retry after 11.67137ms: open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/scheduled-stop-265118/pid: no such file or directory
I0908 11:40:24.725437  752332 retry.go:31] will retry after 26.834666ms: open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/scheduled-stop-265118/pid: no such file or directory
I0908 11:40:24.752704  752332 retry.go:31] will retry after 52.745679ms: open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/scheduled-stop-265118/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-265118 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-265118 -n scheduled-stop-265118
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-265118
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-265118 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-265118
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-265118: exit status 7 (77.102501ms)

                                                
                                                
-- stdout --
	scheduled-stop-265118
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-265118 -n scheduled-stop-265118
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-265118 -n scheduled-stop-265118: exit status 7 (67.02676ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-265118" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-265118
--- PASS: TestScheduledStopUnix (118.66s)
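A short sketch of the scheduled-stop flow exercised above, using only the flags visible in this log:

	out/minikube-linux-amd64 stop -p scheduled-stop-265118 --schedule 5m          # queue a stop five minutes out
	out/minikube-linux-amd64 stop -p scheduled-stop-265118 --cancel-scheduled     # cancel the pending stop
	out/minikube-linux-amd64 stop -p scheduled-stop-265118 --schedule 15s         # reschedule; the VM stops shortly after
	out/minikube-linux-amd64 status -p scheduled-stop-265118 --format={{.Host}}   # prints "Stopped" and exits 7 once it has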

                                                
                                    
x
+
TestRunningBinaryUpgrade (167.47s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.1517783686 start -p running-upgrade-027540 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.1517783686 start -p running-upgrade-027540 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m48.775548131s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-027540 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-027540 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (53.382841756s)
helpers_test.go:175: Cleaning up "running-upgrade-027540" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-027540
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-027540: (1.006811811s)
--- PASS: TestRunningBinaryUpgrade (167.47s)

                                                
                                    
x
+
TestKubernetesUpgrade (193.55s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-922776 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-922776 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (58.834894166s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-922776
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-922776: (2.307086113s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-922776 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-922776 status --format={{.Host}}: exit status 7 (79.508311ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-922776 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-922776 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m8.913650199s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-922776 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-922776 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-922776 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (97.343128ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-922776] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21503
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21503-748170/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21503-748170/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-922776
	    minikube start -p kubernetes-upgrade-922776 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9227762 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-922776 --kubernetes-version=v1.34.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-922776 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-922776 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m2.360659668s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-922776" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-922776
--- PASS: TestKubernetesUpgrade (193.55s)
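The upgrade path verified above, condensed to the commands that matter (versions and profile name taken from this run):

	out/minikube-linux-amd64 start -p kubernetes-upgrade-922776 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 stop  -p kubernetes-upgrade-922776
	out/minikube-linux-amd64 start -p kubernetes-upgrade-922776 --memory=3072 --kubernetes-version=v1.34.0 --driver=kvm2 --container-runtime=crio
	# asking for --kubernetes-version=v1.28.0 again exits 106 (K8S_DOWNGRADE_UNSUPPORTED);
	# minikube suggests deleting the profile or creating a second one rather than downgrading in place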

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-903924 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-903924 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (86.712723ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-903924] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21503
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21503-748170/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21503-748170/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
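The constraint checked here is that --no-kubernetes cannot be combined with --kubernetes-version. If a version is pinned in the global config, the fix minikube itself suggests is to unset it before starting without Kubernetes:

	out/minikube-linux-amd64 config unset kubernetes-version
	out/minikube-linux-amd64 start -p NoKubernetes-903924 --no-kubernetes --driver=kvm2 --container-runtime=crio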

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (124.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-903924 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-903924 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (2m4.032784023s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-903924 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (124.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-312498 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-312498 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (102.175184ms)

                                                
                                                
-- stdout --
	* [false-312498] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21503
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21503-748170/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21503-748170/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 11:42:27.370547  789469 out.go:360] Setting OutFile to fd 1 ...
	I0908 11:42:27.370786  789469 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:42:27.370796  789469 out.go:374] Setting ErrFile to fd 2...
	I0908 11:42:27.370800  789469 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:42:27.370996  789469 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21503-748170/.minikube/bin
	I0908 11:42:27.371589  789469 out.go:368] Setting JSON to false
	I0908 11:42:27.372462  789469 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":73463,"bootTime":1757258284,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 11:42:27.372566  789469 start.go:140] virtualization: kvm guest
	I0908 11:42:27.374652  789469 out.go:179] * [false-312498] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 11:42:27.375798  789469 out.go:179]   - MINIKUBE_LOCATION=21503
	I0908 11:42:27.375827  789469 notify.go:220] Checking for updates...
	I0908 11:42:27.377789  789469 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 11:42:27.379020  789469 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21503-748170/kubeconfig
	I0908 11:42:27.380200  789469 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21503-748170/.minikube
	I0908 11:42:27.381218  789469 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 11:42:27.382182  789469 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 11:42:27.383667  789469 config.go:182] Loaded profile config "NoKubernetes-903924": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 11:42:27.383759  789469 config.go:182] Loaded profile config "force-systemd-flag-950564": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 11:42:27.383839  789469 config.go:182] Loaded profile config "offline-crio-894901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 11:42:27.383932  789469 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 11:42:27.419462  789469 out.go:179] * Using the kvm2 driver based on user configuration
	I0908 11:42:27.420398  789469 start.go:304] selected driver: kvm2
	I0908 11:42:27.420411  789469 start.go:918] validating driver "kvm2" against <nil>
	I0908 11:42:27.420432  789469 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 11:42:27.422087  789469 out.go:203] 
	W0908 11:42:27.423064  789469 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0908 11:42:27.423944  789469 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-312498 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-312498

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-312498

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-312498

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-312498

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-312498

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-312498

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-312498

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-312498

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-312498

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-312498

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312498"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312498"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312498"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-312498

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312498"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312498"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-312498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-312498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-312498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-312498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-312498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-312498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-312498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-312498" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312498"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312498"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312498"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312498"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312498"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-312498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-312498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-312498" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312498"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312498"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312498"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312498"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312498"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-312498

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312498"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312498"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312498"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312498"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312498"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312498"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312498"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312498"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312498"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312498"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312498"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312498"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312498"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312498"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312498"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312498"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312498"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312498"

                                                
                                                
----------------------- debugLogs end: false-312498 [took: 2.954711704s] --------------------------------
helpers_test.go:175: Cleaning up "false-312498" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-312498
--- PASS: TestNetworkPlugins/group/false (3.21s)
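This failure is by design: with --container-runtime=crio a CNI is mandatory, so --cni=false is rejected with MK_USAGE (exit 14) before any VM is created. A working variant would name an explicit CNI instead; the --cni=bridge choice below is an assumption for illustration, not something this suite runs.

	out/minikube-linux-amd64 start -p false-312498 --memory=3072 --driver=kvm2 --container-runtime=crio --cni=bridge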

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (3.7s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.70s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (173.76s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3378326807 start -p stopped-upgrade-804516 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
E0908 11:43:30.882298  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3378326807 start -p stopped-upgrade-804516 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m34.443087753s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3378326807 -p stopped-upgrade-804516 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3378326807 -p stopped-upgrade-804516 stop: (2.118856333s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-804516 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-804516 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m17.200749022s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (173.76s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (66.13s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-903924 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0908 11:43:56.712440  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/functional-461050/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-903924 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m4.773150506s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-903924 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-903924 status -o json: exit status 2 (287.999447ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-903924","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-903924
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-903924: (1.073090313s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (66.13s)

                                                
                                    
TestNoKubernetes/serial/Start (53.14s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-903924 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-903924 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (53.143763503s)
--- PASS: TestNoKubernetes/serial/Start (53.14s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-903924 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-903924 "sudo systemctl is-active --quiet service kubelet": exit status 1 (215.303084ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (6.71s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:171: (dbg) Done: out/minikube-linux-amd64 profile list: (3.640809291s)
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:181: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (3.072985347s)
--- PASS: TestNoKubernetes/serial/ProfileList (6.71s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.47s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-903924
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-903924: (1.47174758s)
--- PASS: TestNoKubernetes/serial/Stop (1.47s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.26s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-804516
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-804516: (1.258230314s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.26s)

                                                
                                    
TestPause/serial/Start (129.13s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-152504 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-152504 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (2m9.127740469s)
--- PASS: TestPause/serial/Start (129.13s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (132.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-312498 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-312498 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (2m12.754686816s)
--- PASS: TestNetworkPlugins/group/auto/Start (132.75s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (97.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-312498 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E0908 11:48:30.883276  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-312498 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m37.671685975s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (97.67s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (37.32s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-152504 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0908 11:48:39.784977  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/functional-461050/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-152504 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (37.279735168s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (37.32s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-312498 "pgrep -a kubelet"
I0908 11:48:53.327791  752332 config.go:182] Loaded profile config "auto-312498": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-312498 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-v8sks" [17116ae6-d6f1-446f-a342-0f6ba17f19f5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0908 11:48:56.712032  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/functional-461050/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-v8sks" [17116ae6-d6f1-446f-a342-0f6ba17f19f5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.005957153s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-gdgwh" [55682ae3-3dd2-40de-ab6f-731afac90227] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004736094s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-312498 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-312498 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-312498 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)
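Note: the three checks above (DNS, Localhost, HairPin) all exec into the netcat deployment created from testdata/netcat-deployment.yaml and verify, respectively, resolution of kubernetes.default, a loopback connection, and a hairpin connection back through the pod's own "netcat" Service. A minimal Go sketch for re-running those probes by hand is shown below; it is not part of net_test.go, the probe helper is purely illustrative, and it assumes kubectl is on PATH with the "auto-312498" context and deployment from this run still present.

// Illustrative only; not from net_test.go. Assumes kubectl on PATH and the
// netcat deployment from testdata/netcat-deployment.yaml still running.
package main

import (
	"fmt"
	"os/exec"
)

// probe (hypothetical helper) execs a command inside the netcat deployment
// and reports only whether it exited with status zero.
func probe(kubeContext string, args ...string) error {
	base := []string{"--context", kubeContext, "exec", "deployment/netcat", "--"}
	return exec.Command("kubectl", append(base, args...)...).Run()
}

func main() {
	ctx := "auto-312498" // kubeconfig context created by this test run
	checks := []struct {
		name string
		args []string
	}{
		{"dns", []string{"nslookup", "kubernetes.default"}},
		{"localhost", []string{"/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080"}},
		// hairpin: the pod dials its own Service name
		{"hairpin", []string{"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080"}},
	}
	for _, c := range checks {
		fmt.Printf("%-9s -> %v\n", c.name, probe(ctx, c.args...))
	}
}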

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-312498 "pgrep -a kubelet"
I0908 11:49:05.655774  752332 config.go:182] Loaded profile config "kindnet-312498": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-312498 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-6pmnc" [b8bd0f7c-099b-4b50-8b63-1689eec80702] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-6pmnc" [b8bd0f7c-099b-4b50-8b63-1689eec80702] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.005656813s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.24s)

                                                
                                    
TestPause/serial/Pause (1.1s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-152504 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-152504 --alsologtostderr -v=5: (1.094942281s)
--- PASS: TestPause/serial/Pause (1.10s)

                                                
                                    
TestPause/serial/VerifyStatus (0.29s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-152504 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-152504 --output=json --layout=cluster: exit status 2 (284.951172ms)

                                                
                                                
-- stdout --
	{"Name":"pause-152504","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-152504","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.29s)
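Note: the cluster-layout JSON printed above can be inspected with a small standalone program. The sketch below is not part of status_test.go; the struct fields simply mirror the sample output (where 418 is shown as Paused, 405 as Stopped, and 200 as OK), and the profile name pause-152504 is just the one from this run. As the non-zero exit above shows, minikube still writes the JSON even when it exits with status 2 for a paused cluster, so the output must be kept on an ExitError.

// Illustrative only; field names mirror the JSON sample above.
package main

import (
	"encoding/json"
	"errors"
	"fmt"
	"os/exec"
)

type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type clusterStatus struct {
	Name       string `json:"Name"`
	StatusName string `json:"StatusName"`
	Nodes      []struct {
		Name       string               `json:"Name"`
		StatusName string               `json:"StatusName"`
		Components map[string]component `json:"Components"`
	} `json:"Nodes"`
}

func main() {
	cmd := exec.Command("minikube", "status", "-p", "pause-152504",
		"--output=json", "--layout=cluster")
	out, err := cmd.Output()
	// A paused cluster makes minikube exit non-zero (exit status 2 above)
	// while still writing the JSON to stdout, so keep the output in that case.
	var exitErr *exec.ExitError
	if err != nil && !errors.As(err, &exitErr) {
		panic(err)
	}
	var st clusterStatus
	if err := json.Unmarshal(out, &st); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s\n", st.Name, st.StatusName)
	for _, n := range st.Nodes {
		for name, c := range n.Components {
			fmt.Printf("  %s/%s: %s (%d)\n", n.Name, name, c.StatusName, c.StatusCode)
		}
	}
}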

                                                
                                    
TestPause/serial/Unpause (0.84s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-152504 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.84s)

                                                
                                    
TestPause/serial/PauseAgain (0.92s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-152504 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.92s)

                                                
                                    
TestPause/serial/DeletePaused (0.88s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-152504 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.88s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.79s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.79s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (279.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-312498 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-312498 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (4m39.009157251s)
--- PASS: TestNetworkPlugins/group/calico/Start (279.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-312498 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-312498 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-312498 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (103.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-312498 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-312498 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m43.067182008s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (103.07s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (125.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-312498 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-312498 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (2m5.382042819s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (125.38s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (82.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-312498 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-312498 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m22.565229827s)
--- PASS: TestNetworkPlugins/group/flannel/Start (82.57s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-312498 "pgrep -a kubelet"
I0908 11:51:04.300222  752332 config.go:182] Loaded profile config "custom-flannel-312498": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-312498 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-btx5r" [12492418-3ac1-4deb-97f3-0e0fa7bf11ab] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-btx5r" [12492418-3ac1-4deb-97f3-0e0fa7bf11ab] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004645108s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.28s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-312498 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-312498 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-312498 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (98.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-312498 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-312498 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m38.143871739s)
--- PASS: TestNetworkPlugins/group/bridge/Start (98.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-312498 "pgrep -a kubelet"
I0908 11:51:39.253781  752332 config.go:182] Loaded profile config "enable-default-cni-312498": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-312498 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5d9cr" [a859890f-0a56-469d-9e08-dcf4307473e0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-5d9cr" [a859890f-0a56-469d-9e08-dcf4307473e0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004671665s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.27s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-312498 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-312498 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-312498 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (71.89s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-073517 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-073517 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m11.888555726s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (71.89s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-nd7kf" [e1bb9e30-38fb-446d-8543-42e56d7cd439] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003877607s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-312498 "pgrep -a kubelet"
I0908 11:52:31.594782  752332 config.go:182] Loaded profile config "flannel-312498": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (9.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-312498 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-w9sxk" [b078805c-0f0f-48eb-939b-b666574d4593] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-w9sxk" [b078805c-0f0f-48eb-939b-b666574d4593] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.006218528s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.27s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-312498 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-312498 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-312498 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (104.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-474007 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-474007 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (1m44.131648624s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (104.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-312498 "pgrep -a kubelet"
I0908 11:53:11.348247  752332 config.go:182] Loaded profile config "bridge-312498": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-312498 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-pjcdf" [265ddac5-cdf0-48a6-9f6d-cd1d663145a9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-pjcdf" [265ddac5-cdf0-48a6-9f6d-cd1d663145a9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004821299s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.28s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (12.35s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-073517 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f20a5e35-8bc1-468d-b4cf-5098d29bd2b9] Pending
helpers_test.go:352: "busybox" [f20a5e35-8bc1-468d-b4cf-5098d29bd2b9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f20a5e35-8bc1-468d-b4cf-5098d29bd2b9] Running
E0908 11:53:30.882312  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/addons-451875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 12.032854164s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-073517 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (12.35s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-312498 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-312498 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-312498 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.73s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-073517 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-073517 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.604529269s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-073517 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.73s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (91.83s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-073517 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-073517 --alsologtostderr -v=3: (1m31.826938669s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (91.83s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (88.71s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-256792 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-256792 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (1m28.710012305s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (88.71s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-5c4jn" [23ae5670-dfcf-4959-a08f-282d691ad5d6] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
E0908 11:53:53.526863  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/auto-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:53:53.533316  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/auto-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:53:53.544725  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/auto-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:53:53.566146  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/auto-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:53:53.607578  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/auto-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:53:53.689179  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/auto-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:53:53.851408  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/auto-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:53:54.173291  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/auto-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:53:54.815674  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/auto-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:53:56.098085  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/auto-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:53:56.712110  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/functional-461050/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "calico-node-5c4jn" [23ae5670-dfcf-4959-a08f-282d691ad5d6] Running
E0908 11:53:58.660377  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/auto-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006093536s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-312498 "pgrep -a kubelet"
I0908 11:53:59.195413  752332 config.go:182] Loaded profile config "calico-312498": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-312498 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-l7f5g" [014e8123-0cee-4efd-ad81-2f61b43fb998] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0908 11:53:59.432473  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/kindnet-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:53:59.438838  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/kindnet-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:53:59.450183  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/kindnet-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:53:59.471541  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/kindnet-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:53:59.513149  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/kindnet-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:53:59.594590  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/kindnet-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:53:59.756250  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/kindnet-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:54:00.077682  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/kindnet-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:54:00.719733  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/kindnet-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:54:02.002030  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/kindnet-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:54:03.782658  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/auto-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-l7f5g" [014e8123-0cee-4efd-ad81-2f61b43fb998] Running
E0908 11:54:04.563442  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/kindnet-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.005199703s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.24s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-312498 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-312498 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0908 11:54:09.685162  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/kindnet-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-312498 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (88.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-149795 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
E0908 11:54:34.506478  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/auto-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:54:40.409358  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/kindnet-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-149795 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (1m28.075479464s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (88.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (14.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-474007 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [4e4bca6a-cf39-4ba4-bc50-f5020ad5a902] Pending
helpers_test.go:352: "busybox" [4e4bca6a-cf39-4ba4-bc50-f5020ad5a902] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [4e4bca6a-cf39-4ba4-bc50-f5020ad5a902] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 14.00351092s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-474007 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (14.31s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-474007 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-474007 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (91.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-474007 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-474007 --alsologtostderr -v=3: (1m31.124507889s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (91.12s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-073517 -n old-k8s-version-073517
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-073517 -n old-k8s-version-073517: exit status 7 (79.085354ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-073517 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (44.94s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-073517 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-073517 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (44.629493811s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-073517 -n old-k8s-version-073517
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (44.94s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (14.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-256792 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f6ed86ff-d7cb-41e0-b8c0-5be7fd66273d] Pending
helpers_test.go:352: "busybox" [f6ed86ff-d7cb-41e0-b8c0-5be7fd66273d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0908 11:55:15.468318  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/auto-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [f6ed86ff-d7cb-41e0-b8c0-5be7fd66273d] Running
E0908 11:55:21.370796  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/kindnet-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 14.004581834s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-256792 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (14.30s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-256792 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-256792 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.03s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (91.05s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-256792 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-256792 --alsologtostderr -v=3: (1m31.054724126s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.05s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (19.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-v5nq2" [76777f3c-8a73-4d88-a4c1-237e85db0398] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-v5nq2" [76777f3c-8a73-4d88-a4c1-237e85db0398] Running
E0908 11:56:04.562648  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/custom-flannel-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:56:04.569046  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/custom-flannel-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:56:04.580436  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/custom-flannel-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:56:04.601911  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/custom-flannel-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:56:04.643356  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/custom-flannel-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:56:04.724859  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/custom-flannel-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:56:04.886433  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/custom-flannel-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:56:05.208607  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/custom-flannel-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:56:05.850644  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/custom-flannel-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:56:07.132722  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/custom-flannel-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:56:09.694042  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/custom-flannel-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 19.004118144s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (19.01s)
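A manual stand-in for the UserAppExistsAfterStop check (an assumption: kubectl wait in place of the test's polling helper), waiting for the dashboard pod to come back after the restart:

  kubectl --context old-k8s-version-073517 -n kubernetes-dashboard wait \
    --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m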

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (14.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-149795 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f7309204-a2be-4cc0-a01b-de13b6afd01e] Pending
helpers_test.go:352: "busybox" [f7309204-a2be-4cc0-a01b-de13b6afd01e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f7309204-a2be-4cc0-a01b-de13b6afd01e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 14.004588073s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-149795 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (14.29s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.99s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-149795 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-149795 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.99s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-v5nq2" [76777f3c-8a73-4d88-a4c1-237e85db0398] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003819966s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-073517 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (91.51s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-149795 --alsologtostderr -v=3
E0908 11:56:14.815917  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/custom-flannel-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-149795 --alsologtostderr -v=3: (1m31.50863784s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.51s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-073517 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)
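The VerifyKubernetesImages step lists the profile's images as JSON and flags anything outside the expected Kubernetes set; a hand-run sketch of the same listing follows (piping through jq is an assumption about available tooling, and the exact JSON schema may differ between minikube versions):

  out/minikube-linux-amd64 -p old-k8s-version-073517 image list --format=json | jq .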

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (2.77s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-073517 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-073517 -n old-k8s-version-073517
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-073517 -n old-k8s-version-073517: exit status 2 (248.055265ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-073517 -n old-k8s-version-073517
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-073517 -n old-k8s-version-073517: exit status 2 (252.734354ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-073517 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-073517 -n old-k8s-version-073517
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-073517 -n old-k8s-version-073517
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.77s)
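Sketch of the Pause flow exercised above, for manual reproduction: pause the profile, confirm the apiserver and kubelet report a non-running state (status exits 2 here, which the test treats as acceptable), then unpause and re-check:

  out/minikube-linux-amd64 pause -p old-k8s-version-073517 --alsologtostderr -v=1
  out/minikube-linux-amd64 status --format='{{.APIServer}}' -p old-k8s-version-073517 || true
  out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p old-k8s-version-073517 || true
  out/minikube-linux-amd64 unpause -p old-k8s-version-073517 --alsologtostderr -v=1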

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (47.7s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-549052 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
E0908 11:56:25.058217  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/custom-flannel-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-549052 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (47.704362589s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (47.70s)
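One way to confirm the kubeadm extra-config took effect after FirstStart (an illustration only; the test itself just waits for the apiserver, system pods and default service account): the pod-network CIDR passed via --extra-config should appear as podSubnet in the kubeadm-config ConfigMap:

  kubectl --context newest-cni-549052 -n kube-system get cm kubeadm-config -o yaml | grep podSubnet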

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-474007 -n no-preload-474007
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-474007 -n no-preload-474007: exit status 7 (80.553576ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-474007 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (74.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-474007 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
E0908 11:56:37.389753  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/auto-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:56:39.509701  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/enable-default-cni-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:56:39.516159  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/enable-default-cni-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:56:39.527503  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/enable-default-cni-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:56:39.548865  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/enable-default-cni-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:56:39.590301  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/enable-default-cni-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:56:39.671759  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/enable-default-cni-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:56:39.833346  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/enable-default-cni-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:56:40.154758  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/enable-default-cni-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:56:40.796456  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/enable-default-cni-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:56:42.077915  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/enable-default-cni-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:56:43.293054  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/kindnet-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:56:44.639637  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/enable-default-cni-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:56:45.539849  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/custom-flannel-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:56:49.762009  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/enable-default-cni-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-474007 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (1m13.90725332s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-474007 -n no-preload-474007
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (74.19s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-256792 -n embed-certs-256792
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-256792 -n embed-certs-256792: exit status 7 (69.269937ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-256792 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (59.47s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-256792 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
E0908 11:57:00.003916  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/enable-default-cni-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-256792 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (58.953118083s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-256792 -n embed-certs-256792
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (59.47s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.48s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-549052 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-549052 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.479239075s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.48s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (11.42s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-549052 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-549052 --alsologtostderr -v=3: (11.423858064s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.42s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-549052 -n newest-cni-549052
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-549052 -n newest-cni-549052: exit status 7 (79.282104ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-549052 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0908 11:57:20.485498  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/enable-default-cni-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (49.59s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-549052 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
E0908 11:57:25.364344  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/flannel-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:57:25.370784  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/flannel-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:57:25.382262  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/flannel-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:57:25.403685  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/flannel-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:57:25.445090  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/flannel-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:57:25.527467  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/flannel-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:57:25.689132  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/flannel-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:57:26.011235  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/flannel-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:57:26.502232  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/custom-flannel-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:57:26.652734  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/flannel-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:57:27.934839  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/flannel-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:57:30.496621  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/flannel-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:57:35.618507  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/flannel-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-549052 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (49.201185289s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-549052 -n newest-cni-549052
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (49.59s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-149795 -n default-k8s-diff-port-149795
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-149795 -n default-k8s-diff-port-149795: exit status 7 (72.255852ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-149795 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (60.44s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-149795 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-149795 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (1m0.182901729s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-149795 -n default-k8s-diff-port-149795
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (60.44s)
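A quick check (not performed by the test) that the --apiserver-port=8444 flag is reflected in the kubeconfig entry written for this profile:

  kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-149795")].cluster.server}'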

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (23.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-tf5n8" [e1eaa3d4-a96e-4a0f-a2f7-e584a533807b] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0908 11:57:45.860313  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/flannel-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-tf5n8" [e1eaa3d4-a96e-4a0f-a2f7-e584a533807b] Running
E0908 11:58:01.446865  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/enable-default-cni-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 23.006479893s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (23.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-rhjbt" [36ebfde5-e28c-4f54-b15d-fccc578093ab] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.007458604s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-rhjbt" [36ebfde5-e28c-4f54-b15d-fccc578093ab] Running
E0908 11:58:06.342701  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/flannel-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003920261s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-256792 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-tf5n8" [e1eaa3d4-a96e-4a0f-a2f7-e584a533807b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004641324s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-474007 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.10s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-256792 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.32s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (3.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-256792 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-256792 --alsologtostderr -v=1: (1.010136172s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-256792 -n embed-certs-256792
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-256792 -n embed-certs-256792: exit status 2 (275.474898ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-256792 -n embed-certs-256792
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-256792 -n embed-certs-256792: exit status 2 (258.047331ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-256792 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-256792 -n embed-certs-256792
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-256792 -n embed-certs-256792
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.25s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-549052 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.4s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-549052 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-549052 -n newest-cni-549052
E0908 11:58:11.607776  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/bridge-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:58:11.614117  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/bridge-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:58:11.625425  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/bridge-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:58:11.646752  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/bridge-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:58:11.688108  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/bridge-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:58:11.769625  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/bridge-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:58:11.931191  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/bridge-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-549052 -n newest-cni-549052: exit status 2 (760.293418ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-549052 -n newest-cni-549052
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-549052 -n newest-cni-549052: exit status 2 (275.071761ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-549052 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-549052 -n newest-cni-549052
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-549052 -n newest-cni-549052
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.40s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-474007 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (3.78s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-474007 --alsologtostderr -v=1
E0908 11:58:12.894727  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/bridge-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-474007 --alsologtostderr -v=1: (1.053702666s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-474007 -n no-preload-474007
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-474007 -n no-preload-474007: exit status 2 (289.391019ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-474007 -n no-preload-474007
E0908 11:58:14.176980  752332 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-748170/.minikube/profiles/bridge-312498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-474007 -n no-preload-474007: exit status 2 (823.152047ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-474007 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-474007 -n no-preload-474007
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-474007 -n no-preload-474007
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.78s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-149795 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.68s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-149795 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-149795 -n default-k8s-diff-port-149795
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-149795 -n default-k8s-diff-port-149795: exit status 2 (244.208303ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-149795 -n default-k8s-diff-port-149795
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-149795 -n default-k8s-diff-port-149795: exit status 2 (248.919892ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-149795 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-149795 -n default-k8s-diff-port-149795
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-149795 -n default-k8s-diff-port-149795
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.68s)

                                                
                                    

Test skip (40/329)

Order  Skipped test  Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.0/cached-images 0
15 TestDownloadOnly/v1.34.0/binaries 0
16 TestDownloadOnly/v1.34.0/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.32
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
56 TestHyperKitDriverInstallOrUpdate 0
57 TestHyperkitDriverSkipUpgrade 0
108 TestFunctional/parallel/DockerEnv 0
109 TestFunctional/parallel/PodmanEnv 0
126 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
127 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
128 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
129 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
132 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
157 TestFunctionalNewestKubernetes 0
158 TestGvisorAddon 0
180 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
259 TestNetworkPlugins/group/kubenet 2.79
267 TestNetworkPlugins/group/cilium 4.11
282 TestStartStop/group/disable-driver-mounts 0.16
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.32s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-451875 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.32s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (2.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-312498 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-312498

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-312498

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-312498

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-312498

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-312498

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-312498

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-312498

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-312498

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-312498

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-312498

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312498"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312498"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312498"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-312498

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312498"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312498"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-312498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-312498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-312498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-312498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-312498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-312498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-312498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-312498" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312498"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312498"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312498"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312498"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312498"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-312498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-312498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-312498" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312498"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312498"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312498"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312498"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312498"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-312498

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312498"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312498"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312498"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312498"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312498"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312498"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312498"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312498"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312498"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312498"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312498"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312498"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312498"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312498"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312498"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312498"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312498"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312498"

                                                
                                                
----------------------- debugLogs end: kubenet-312498 [took: 2.650439941s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-312498" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-312498
--- SKIP: TestNetworkPlugins/group/kubenet (2.79s)

                                                
                                    
TestNetworkPlugins/group/cilium (4.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-312498 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-312498

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-312498

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-312498

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-312498

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-312498

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-312498

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-312498

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-312498

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-312498

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-312498

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312498"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312498"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312498"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-312498

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312498"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312498"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-312498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-312498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-312498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-312498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-312498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-312498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-312498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-312498" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312498"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312498"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312498"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312498"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312498"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-312498

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-312498

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-312498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-312498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-312498

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-312498

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-312498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-312498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-312498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-312498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-312498" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312498"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312498"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312498"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312498"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312498"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-312498

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312498"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312498"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312498"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312498"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312498"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312498"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312498"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312498"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312498"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312498"

>>> host: containerd daemon config:
* Profile "cilium-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312498"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312498"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312498"

>>> host: containerd config dump:
* Profile "cilium-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312498"

>>> host: crio daemon status:
* Profile "cilium-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312498"

>>> host: crio daemon config:
* Profile "cilium-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312498"

>>> host: /etc/crio:
* Profile "cilium-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312498"

>>> host: crio config:
* Profile "cilium-312498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312498"

----------------------- debugLogs end: cilium-312498 [took: 3.949088286s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-312498" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-312498
--- SKIP: TestNetworkPlugins/group/cilium (4.11s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-747650" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-747650
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)
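
Note on the skip above: start_stop_delete_test.go:101 reports that the disable-driver-mounts group only runs on the virtualbox driver, so on this kvm2-based job it is skipped before any cluster work happens. The minikube helper that performs the check is not reproduced here; what follows is only a minimal, self-contained Go sketch of that kind of driver gate, where driverName, skipUnlessVirtualBox, and TestDisableDriverMountsSketch are hypothetical stand-ins rather than the suite's real identifiers.

package example

import "testing"

// driverName is a hypothetical stand-in for the active minikube driver;
// the real suite derives this from its own flags and profile config.
var driverName = "kvm2"

// skipUnlessVirtualBox mirrors the behaviour recorded in the log above:
// any driver other than virtualbox short-circuits the test with a skip.
func skipUnlessVirtualBox(t *testing.T) {
	t.Helper()
	if driverName != "virtualbox" {
		t.Skipf("skipping: disable-driver-mounts only runs on virtualbox (driver=%q)", driverName)
	}
}

func TestDisableDriverMountsSketch(t *testing.T) {
	skipUnlessVirtualBox(t)
	// A real test would start a cluster with --disable-driver-mounts here.
}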